Technical Debt and its Types Datasets
Collection · 24 items · Updated
Unnamed: 0 (int64, 9–832k) | id (float64, 2.5B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7–112) | repo_url (string, length 36–141) | action (string, 3 classes) | title (string, length 4–323) | labels (string, length 4–2.67k) | body (string, length 23–107k) | index (string, 4 classes) | text_combine (string, length 96–107k) | label (string, 2 classes) | text (string, length 96–56.1k) | binary_label (int64, 0–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
546 | 8,551,946,557 | IssuesEvent | 2018-11-07 19:32:52 | Microsoft/VFSForGit | https://api.github.com/repos/Microsoft/VFSForGit | opened | post-fetch: Exception while running post-fetch job: Illegal characters in path. | MountReliability | So far I'm only seeing 1 user affected by this, but it affected 3 of their enlistments
time 2018-11-07 03:25:05.3436328
MountIds:
a3ad84a5497f4002a1a9fbe8c45d225f
5b6232a0f62a42278ba6cf0c9eb58aa3
9c5d6b418003410eb1a280124b578df7
| True | post-fetch: Exception while running post-fetch job: Illegal characters in path. - So far I'm only seeing 1 user affected by this, but it affected 3 of their enlistments
time 2018-11-07 03:25:05.3436328
MountIds:
a3ad84a5497f4002a1a9fbe8c45d225f
5b6232a0f62a42278ba6cf0c9eb58aa3
9c5d6b418003410eb1a280124b578df7
| reli | post fetch exception while running post fetch job illegal characters in path so far i m only seeing user affected by this but it affected of their enlistments time mountids | 1 |
297,038 | 9,159,833,153 | IssuesEvent | 2019-03-01 04:29:50 | SocialMediaExchange/muhal | https://api.github.com/repos/SocialMediaExchange/muhal | closed | The answer for هل احتجز/احتجزت؟ does not match the airtable. | bug priority | In the Cases of Mustafa Sbeity and Tima Hayek, at least, we have noticed the answer to this question is N/A when the answer is نعم in the airtable. This doesn't seem to be a problem for the English, just the Arabic. | 1.0 | The answer for هل احتجز/احتجزت؟ does not match the airtable. - In the Cases of Mustafa Sbeity and Tima Hayek, at least, we have noticed the answer to this question is N/A when the answer is نعم in the airtable. This doesn't seem to be a problem for the English, just the Arabic. | non_reli | the answer for هل احتجز احتجزت؟ does not match the airtable in the cases of mustafa sbeity and tima hayek at least we have noticed the answer to this question is n a when the answer is نعم in the airtable this doesn t seem to be a problem for the english just the arabic | 0 |
27,828 | 6,905,483,378 | IssuesEvent | 2017-11-27 07:21:14 | BTDF/CodePlexDiscussions | https://api.github.com/repos/BTDF/CodePlexDiscussions | opened | Discussion:
Error on undeploy when app doesn | CodePlexMigrated | <b>majikandy[11/15/2017 6:03:54 PM]</b> <br />Hi,
The 5.7 version appears to have an additional stop condition on undeploy when the app doesn't exist.
This isn't too much of a problem locally. But on a build server it is a problem. This is because first time deploy will always error if we are running undeploy every time as a first step.
In version 5.0, always running undeploy first was fine because this error didn't happen as PrepareAppForUndeploy didn't have the error text line below....
<Target Name=PrepareAppForUndeploy DependsOnTargets=VerifyBizTalkAppExists>
<Error Text=BizTalk application '$(BizTalkAppName)' does not exist in the group, so there is nothing to do.
Condition='$(AppExists)' == 'false' and '$(DeploymentMode)' == 'Undeploy' />
...
For workarounds I can think of a couple of options:-
Hack the targets file to remove that Error Text line on the build agent Call VerifyBizTalkAppExists in the line before calling Undeploy from the build server and only run undeploy if it exists.
It would be nice if there was a parameter I could pass in like SkipAppNotExistsError eg.
Condition='$(SkipAppNotExistsError) == 'false' and $(AppExists)' == 'false' and '$(DeploymentMode)' == 'Undeploy' />
Or a better solution if you have one?
Many thanks
Andy
| 1.0 | Discussion:
Error on undeploy when app doesn - <b>majikandy[11/15/2017 6:03:54 PM]</b> <br />Hi,
The 5.7 version appears to have an additional stop condition on undeploy when the app doesn't exist.
This isn't too much of a problem locally. But on a build server it is a problem. This is because first time deploy will always error if we are running undeploy every time as a first step.
In version 5.0, always running undeploy first was fine because this error didn't happen as PrepareAppForUndeploy didn't have the error text line below....
<Target Name=PrepareAppForUndeploy DependsOnTargets=VerifyBizTalkAppExists>
<Error Text=BizTalk application '$(BizTalkAppName)' does not exist in the group, so there is nothing to do.
Condition='$(AppExists)' == 'false' and '$(DeploymentMode)' == 'Undeploy' />
...
For workarounds I can think of a couple of options:-
Hack the targets file to remove that Error Text line on the build agent Call VerifyBizTalkAppExists in the line before calling Undeploy from the build server and only run undeploy if it exists.
It would be nice if there was a parameter I could pass in like SkipAppNotExistsError eg.
Condition='$(SkipAppNotExistsError) == 'false' and $(AppExists)' == 'false' and '$(DeploymentMode)' == 'Undeploy' />
Or a better solution if you have one?
Many thanks
Andy
| non_reli | discussion error on undeploy when app doesn majikandy hi the version appears to have an additional stop condition on undeploy when the app doesn t exist this isn t too much of a problem locally but on a build server it is a problem this is because first time deploy will always error if we are running undeploy every time as a first step in version always running undeploy first was fine because this error didn t happen as prepareappforundeploy didn t have the error text line below error text biztalk application biztalkappname does not exist in the group so there is nothing to do condition appexists false and deploymentmode undeploy for workarounds i can think of a couple of options hack the targets file to remove that error text line on the build agent call verifybiztalkappexists in the line before calling undeploy from the build server and only run undeploy if it exists it would be nice if there was a parameter i could pass in like skipappnotexistserror eg condition skipappnotexistserror false and appexists false and deploymentmode undeploy or a better solution if you have one many thanks andy | 0 |
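The record above proposes gating the "app does not exist" error behind a `SkipAppNotExistsError` parameter. A minimal sketch of that proposed condition logic, written in Python for illustration rather than MSBuild, with the same (hypothetical) parameter name:

```python
def should_raise_missing_app_error(app_exists: bool,
                                   deployment_mode: str,
                                   skip_app_not_exists_error: bool = False) -> bool:
    """Mirror of the proposed MSBuild Condition:
    '$(SkipAppNotExistsError)' == 'false' and '$(AppExists)' == 'false'
    and '$(DeploymentMode)' == 'Undeploy'."""
    return (not skip_app_not_exists_error
            and not app_exists
            and deployment_mode == "Undeploy")

# First-time deploy on a build server: undeploy runs first, app is absent.
print(should_raise_missing_app_error(False, "Undeploy"))   # True -> build fails today
# With the proposed flag set, the build would continue instead.
print(should_raise_missing_app_error(False, "Undeploy",
                                     skip_app_not_exists_error=True))  # False
```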
2,517 | 26,005,403,706 | IssuesEvent | 2022-12-20 18:51:44 | StormSurgeLive/asgs | https://api.github.com/repos/StormSurgeLive/asgs | closed | make `asgs_main.sh` exit with an error if `$SCRIPTDIR` is different from the `$SCRIPTDIR` in the `$STATEFILE` | important non-critical reliability | This can happen if you are running an ASGS instance in one installation (e.g., an issue-specific installation directory), stop it and try to restart it in another installation directory (e.g., production). | True | make `asgs_main.sh` exit with an error if `$SCRIPTDIR` is different from the `$SCRIPTDIR` in the `$STATEFILE` - This can happen if you are running an ASGS instance in one installation (e.g., an issue-specific installation directory), stop it and try to restart it in another installation directory (e.g., production). | reli | make asgs main sh exit with an error if scriptdir is different from the scriptdir in the statefile this can happen if you are running an asgs instance in one installation e g an issue specific installation directory stop it and try to restart it in another installation directory e g production | 1 |
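The behaviour requested in the record above, failing fast when the running `$SCRIPTDIR` differs from the one recorded in `$STATEFILE`, can be sketched as follows; the key=value statefile format here is an assumption for illustration, not the actual ASGS statefile layout:

```python
def check_scriptdir(current_scriptdir: str, statefile_text: str) -> None:
    """Raise if the SCRIPTDIR recorded in the statefile differs from the
    installation directory the instance was restarted from."""
    recorded = None
    for line in statefile_text.splitlines():
        if line.startswith("SCRIPTDIR="):
            recorded = line.split("=", 1)[1].strip()
    if recorded is not None and recorded != current_scriptdir:
        raise RuntimeError(
            f"SCRIPTDIR mismatch: running from {current_scriptdir!r} "
            f"but statefile records {recorded!r}")

# Restarting in the same installation is fine; a different one should abort.
check_scriptdir("/opt/asgs-prod", "SCRIPTDIR=/opt/asgs-prod\nCYCLE=12")
```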
1,488 | 16,545,144,386 | IssuesEvent | 2021-05-27 22:34:21 | argoproj/argo-workflows | https://api.github.com/repos/argoproj/argo-workflows | closed | Rate-limiting pod creation | enhancement epic/reliability epic/scaling | Creating 1000s of pods floods and overloads Kubernetes. We should be able to control the rate at which we create resources (and therefore pods). | True | Rate-limiting pod creation - Creating 1000s of pods floods and overloads Kubernetes. We should be able to control the rate at which we create resources (and therefore pods). | reli | rate limiting pod creation creating of pods floods and overloads kubernetes we should be able to control the rate at which we create resources and therefore pods | 1 |
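A common way to implement the controlled creation rate described in the record above is a token bucket: bursts are allowed up to a capacity, and sustained throughput is capped at a fixed rate. This is a generic sketch, not the argo-workflows implementation:

```python
import time

class TokenBucket:
    """Admit at most `rate` operations/second with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate, self.capacity, self.now = rate, capacity, now
        self.tokens, self.last = capacity, now()

    def allow(self) -> bool:
        t = self.now()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
# A burst of 5 pod creations is admitted immediately; the 6th is deferred.
print([bucket.allow() for _ in range(6)])  # [True, True, True, True, True, False]
```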
229,208 | 18,286,661,483 | IssuesEvent | 2021-10-05 11:04:20 | DILCISBoard/eark-ip-test-corpus | https://api.github.com/repos/DILCISBoard/eark-ip-test-corpus | closed | CSIP72 Test Case Description | test case corpus package | **Specification:**
- **Name:** E-ARK CSIP
- **Version:** 2.0-DRAFT
- **URL:** http://earkcsip.dilcis.eu/
**Requirement:**
- **Id:** CSIP72
- **Link:** http://earkcsip.dilcis.eu/#CSIP72
**Error Level:** ERROR
**Description:**
CSIP72 | File checksum type fileSec/fileGrp/file/@CHECKSUMTYPE | The type of checksum following the value list in the standard which used for the linked file. | 1..1 MUST
-- | -- | -- | --
| 1.0 | CSIP72 Test Case Description - **Specification:**
- **Name:** E-ARK CSIP
- **Version:** 2.0-DRAFT
- **URL:** http://earkcsip.dilcis.eu/
**Requirement:**
- **Id:** CSIP72
- **Link:** http://earkcsip.dilcis.eu/#CSIP72
**Error Level:** ERROR
**Description:**
CSIP72 | File checksum type fileSec/fileGrp/file/@CHECKSUMTYPE | The type of checksum following the value list in the standard which used for the linked file. | 1..1 MUST
-- | -- | -- | --
| non_reli | test case description specification name e ark csip version draft url requirement id link error level error description file checksum type filesec filegrp file checksumtype the type of checksum following the value list in the standard which used for the linked file must | 0 |
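The requirement above says `@CHECKSUMTYPE` must name the algorithm used for the linked file, drawn from the standard's value list (which, as an assumption here, includes common names such as SHA-256). Computing such a checksum for a package file might look like:

```python
import hashlib

def file_checksum(data: bytes, checksum_type: str = "SHA-256") -> str:
    """Compute the checksum recorded in fileSec/fileGrp/file/@CHECKSUM,
    with the algorithm named by @CHECKSUMTYPE."""
    algo = checksum_type.replace("-", "").lower()   # "SHA-256" -> "sha256"
    return hashlib.new(algo, data).hexdigest()

print(file_checksum(b"hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```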
694,891 | 23,835,148,077 | IssuesEvent | 2022-09-06 04:40:10 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | closed | spansql : create view gives an error : unexpected token "c" | api: spanner priority: p3 | **Client**
golang spansql v1.32.0
**Go Environment**
$ go version 1.69.9
the sql
CREATE VIEW
resourceconfiguration_view SQL SECURITY INVOKER AS
SELECT c.id,r.id
FROM resourceconfiguration c
LEFT JOIN sa_service_runners r
ON r.id = c.impersonation;
The problem is c.id in the select statement . if you don't use c.id, it works , but you will need this if the tables have the same column names
we got following error
spansql_test.go:101: ParseDDL("CREATE VIEW\n resourceconfiguration_view SQL SECURITY INVOKER AS\nSELECT c.id,r.id\n FROM resourceconfiguration c\n LEFT JOIN sa_service_runners r\n ON r.id = c.impersonation;"): filename:4: unexpected token "c"
| 1.0 | spansql : create view gives an error : unexpected token "c" - **Client**
golang spansql v1.32.0
**Go Environment**
$ go version 1.69.9
the sql
CREATE VIEW
resourceconfiguration_view SQL SECURITY INVOKER AS
SELECT c.id,r.id
FROM resourceconfiguration c
LEFT JOIN sa_service_runners r
ON r.id = c.impersonation;
The problem is c.id in the select statement . if you don't use c.id, it works , but you will need this if the tables have the same column names
we got following error
spansql_test.go:101: ParseDDL("CREATE VIEW\n resourceconfiguration_view SQL SECURITY INVOKER AS\nSELECT c.id,r.id\n FROM resourceconfiguration c\n LEFT JOIN sa_service_runners r\n ON r.id = c.impersonation;"): filename:4: unexpected token "c"
| non_reli | spansql create view gives an error unexpected token c client golang spansql go environment go version the sql create view resourceconfiguration view sql security invoker as select c id r id from resourceconfiguration c left join sa service runners r on r id c impersonation the problem is c id in the select statement if you don t use c id it works but you will need this if the tables have the same column names we got following error spansql test go parseddl create view n resourceconfiguration view sql security invoker as nselect c id r id n from resourceconfiguration c n left join sa service runners r n on r id c impersonation filename unexpected token c | 0 |
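The failing statement qualifies columns with table aliases (`c.id`, `r.id`), which is standard SQL even though this spansql version rejects it. A quick illustration with Python's built-in sqlite3, using hypothetical in-memory tables standing in for the Spanner schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resourceconfiguration (id INTEGER, impersonation INTEGER);
    CREATE TABLE sa_service_runners (id INTEGER);
    INSERT INTO resourceconfiguration VALUES (1, 7);
    INSERT INTO sa_service_runners VALUES (7);
""")
# Alias-qualified columns disambiguate the two same-named `id` columns.
rows = conn.execute("""
    SELECT c.id, r.id
    FROM resourceconfiguration c
    LEFT JOIN sa_service_runners r ON r.id = c.impersonation
""").fetchall()
print(rows)  # [(1, 7)]
```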
262 | 5,841,373,735 | IssuesEvent | 2017-05-10 00:37:01 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Creating C# WPF apps crash VS with ArgumentNullException | Area-IDE Bug Tenet-Reliability | **Version Used**:
VSUML 26507.1
**Steps to Reproduce**:
1. Install VS with .NET Desktop Development
2. Create a new C# WPF App
**Expected Behavior**:
Project creates successfully and you are able to work on it.
**Actual Behavior**:
Project creates followed by crashing VS.
Exception:
```
CLR: Managed code called FailFast, saying "System.ArgumentNullException: Value cannot be null.
Parameter name: key
at System.Runtime.CompilerServices.ConditionalWeakTable`2.TryGetValue(TKey key, TValue& value)
at System.Runtime.CompilerServices.ConditionalWeakTable`2.GetValue(TKey key, CreateValueCallback createValueCallback)
at Microsoft.CodeAnalysis.Serialization.ChecksumCache.GetOrCreate(Object value, CreateValueCallback checksumCreator)
at Microsoft.CodeAnalysis.FindSymbols.SyntaxTreeIndex.<GetChecksumAsync>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.FindSymbols.SyntaxTreeIndex.<PrecalculateAsync>d__51.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.SolutionCrawler.SolutionCrawlerRegistrationService.WorkCoordinator.IncrementalAnalyzerProcessor.<>c__DisplayClass31_1`1.<<RunAnalyzersAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.SolutionCrawler.SolutionCrawlerRegistrationService.WorkCoordinator.IncrementalAnalyzerProcessor.<GetOrDefaultAsync>d__33`2.MoveNext()"
```
| True | Creating C# WPF apps crash VS with ArgumentNullException - **Version Used**:
VSUML 26507.1
**Steps to Reproduce**:
1. Install VS with .NET Desktop Development
2. Create a new C# WPF App
**Expected Behavior**:
Project creates successfully and you are able to work on it.
**Actual Behavior**:
Project creates followed by crashing VS.
Exception:
```
CLR: Managed code called FailFast, saying "System.ArgumentNullException: Value cannot be null.
Parameter name: key
at System.Runtime.CompilerServices.ConditionalWeakTable`2.TryGetValue(TKey key, TValue& value)
at System.Runtime.CompilerServices.ConditionalWeakTable`2.GetValue(TKey key, CreateValueCallback createValueCallback)
at Microsoft.CodeAnalysis.Serialization.ChecksumCache.GetOrCreate(Object value, CreateValueCallback checksumCreator)
at Microsoft.CodeAnalysis.FindSymbols.SyntaxTreeIndex.<GetChecksumAsync>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.FindSymbols.SyntaxTreeIndex.<PrecalculateAsync>d__51.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.SolutionCrawler.SolutionCrawlerRegistrationService.WorkCoordinator.IncrementalAnalyzerProcessor.<>c__DisplayClass31_1`1.<<RunAnalyzersAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.CodeAnalysis.SolutionCrawler.SolutionCrawlerRegistrationService.WorkCoordinator.IncrementalAnalyzerProcessor.<GetOrDefaultAsync>d__33`2.MoveNext()"
```
| reli | creating c wpf apps crash vs with argumentnullexception version used vsuml steps to reproduce install vs with net desktop development create a new c wpf app expected behavior project creates successfully and you are able to work on it actual behavior project creates followed by crashing vs exception clr managed code called failfast saying system argumentnullexception value cannot be null parameter name key at system runtime compilerservices conditionalweaktable trygetvalue tkey key tvalue value at system runtime compilerservices conditionalweaktable getvalue tkey key createvaluecallback createvaluecallback at microsoft codeanalysis serialization checksumcache getorcreate object value createvaluecallback checksumcreator at microsoft codeanalysis findsymbols syntaxtreeindex d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis findsymbols syntaxtreeindex d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis solutioncrawler solutioncrawlerregistrationservice workcoordinator incrementalanalyzerprocessor c b d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft codeanalysis solutioncrawler solutioncrawlerregistrationservice workcoordinator incrementalanalyzerprocessor d movenext | 1 |
2,774 | 27,608,623,281 | IssuesEvent | 2023-03-09 14:36:15 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | opened | [FEA] Allow input split on CUDF too long exceptions | feature request ? - Needs Triage reliability | **Is your feature request related to a problem? Please describe.**
Once https://github.com/NVIDIA/spark-rapids/issues/7866 goes in we will have the ability to split input data when executing expressions. But one of the limitations with CUDF is that of a int as the index in offsets. This limits the maximum size any column can be, including strings and lists/arrays. This is especially problematic for deeply nested types.
We should have a way to catch an exception from cudf and split/retry the operation if that exception is one that indicates that the output was too long for CUDF to support. We are likely going to have to make changes to CUDF and to the CUDF JNI APIs so it is easy to tell when this happens. | True | [FEA] Allow input split on CUDF too long exceptions - **Is your feature request related to a problem? Please describe.**
Once https://github.com/NVIDIA/spark-rapids/issues/7866 goes in we will have the ability to split input data when executing expressions. But one of the limitations with CUDF is that of a int as the index in offsets. This limits the maximum size any column can be, including strings and lists/arrays. This is especially problematic for deeply nested types.
We should have a way to catch an exception from cudf and split/retry the operation if that exception is one that indicates that the output was too long for CUDF to support. We are likely going to have to make changes to CUDF and to the CUDF JNI APIs so it is easy to tell when this happens. | reli | allow input split on cudf too long exceptions is your feature request related to a problem please describe once goes in we will have the ability to split input data when executing expressions but one of the limitations with cudf is that of a int as the index in offsets this limits the maximum size any column can be including strings and lists arrays this is especially problematic for deeply nested types we should have a way to catch an exception from cudf and split retry the operation if that exception is one that indicates that the output was too long for cudf to support we are likely going to have to make changes to cudf and to the cudf jni apis so it is easy to tell when this happens | 1 |
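The catch-and-split strategy described above can be sketched generically: run an operation on a batch, and when it fails with a "too long" error, split the batch and retry each half. This is an illustrative Python sketch, not the spark-rapids/cuDF implementation:

```python
class OutputTooLongError(Exception):
    """Stand-in for the cuDF error signalling an offset overflow (> 2^31 - 1)."""

def process_with_split(batch, op, min_size=1):
    """Apply `op` to `batch`; on overflow, split in half and retry recursively."""
    try:
        return [op(batch)]
    except OutputTooLongError:
        if len(batch) <= min_size:
            raise  # cannot split any further
        mid = len(batch) // 2
        return (process_with_split(batch[:mid], op, min_size)
                + process_with_split(batch[mid:], op, min_size))

# Toy op that "overflows" on batches larger than 2 rows.
def op(b):
    if len(b) > 2:
        raise OutputTooLongError()
    return sum(b)

print(process_with_split([1, 2, 3, 4, 5], op))  # [3, 3, 9]
```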
250,468 | 18,891,329,154 | IssuesEvent | 2021-11-15 13:33:14 | aws/aws-sdk-go-v2 | https://api.github.com/repos/aws/aws-sdk-go-v2 | closed | Documentation issue for S3Uri in InputDataConfig | documentation response-requested closing-soon | **Describe the issue with documentation**
`InputDataConfig` for `StartEntitiesDetectionJob` documentation states that S3Uri should be prefixed with S3://bucketname/prefix, but when I do that I get a `ValidationException` with this message:
`2 validation errors detected: Value 'S3://mybucket/tmp/nlp/input/3e136ed6-c994-41dc-9258-36b1f16f1f13/interaction-4745004.txt' at 'inputDataConfig.s3Uri' failed to satisfy constraint: Member must satisfy regular expression pattern: s3://[a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9](/.*)?; Value 'S3://mybucket/tmp/nlp/output/3e136ed6-c994-41dc-9258-36b1f16f1f13/interaction-4745004.txt' at 'outputDataConfig.s3Uri' failed to satisfy constraint: Member must satisfy regular expression pattern: s3://[a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9](/.*)?`
**To Reproduce (observed behavior)**
```
jobResp, err := client.StartEntitiesDetectionJob(context.Background(), &comprehend.StartEntitiesDetectionJobInput{
DataAccessRoleArn: aws.String("arn:aws:iam::<accountNum>:role/ComprehendDataAccessRole"),
InputDataConfig: &types.InputDataConfig{
S3Uri: aws.String("S3://mybucket/tmp/nlp/output/3e136ed6-c994-41dc-9258-36b1f16f1f13/interaction-4745004.txt"),
InputFormat: "ONE_DOC_PER_FILE",
},
LanguageCode: "en",
OutputDataConfig: &types.OutputDataConfig{
S3Uri: aws.String(bucketPrefix + outputFileKey),
},
})
```
**Expected behavior**
Docs should be updated or this error should not be returned.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here. | 1.0 | Documentation issue for S3Uri in InputDataConfig - **Describe the issue with documentation**
`InputDataConfig` for `StartEntitiesDetectionJob` documentation states that S3Uri should be prefixed with S3://bucketname/prefix, but when I do that I get a `ValidationException` with this message:
`2 validation errors detected: Value 'S3://mybucket/tmp/nlp/input/3e136ed6-c994-41dc-9258-36b1f16f1f13/interaction-4745004.txt' at 'inputDataConfig.s3Uri' failed to satisfy constraint: Member must satisfy regular expression pattern: s3://[a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9](/.*)?; Value 'S3://mybucket/tmp/nlp/output/3e136ed6-c994-41dc-9258-36b1f16f1f13/interaction-4745004.txt' at 'outputDataConfig.s3Uri' failed to satisfy constraint: Member must satisfy regular expression pattern: s3://[a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9](/.*)?`
**To Reproduce (observed behavior)**
```
jobResp, err := client.StartEntitiesDetectionJob(context.Background(), &comprehend.StartEntitiesDetectionJobInput{
DataAccessRoleArn: aws.String("arn:aws:iam::<accountNum>:role/ComprehendDataAccessRole"),
InputDataConfig: &types.InputDataConfig{
S3Uri: aws.String("S3://mybucket/tmp/nlp/output/3e136ed6-c994-41dc-9258-36b1f16f1f13/interaction-4745004.txt"),
InputFormat: "ONE_DOC_PER_FILE",
},
LanguageCode: "en",
OutputDataConfig: &types.OutputDataConfig{
S3Uri: aws.String(bucketPrefix + outputFileKey),
},
})
```
**Expected behavior**
Docs should be updated or this error should not be returned.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here. | non_reli | documentation issue for in inputdataconfig describe the issue with documentation inputdataconfig for startentitiesdetectionjob documentation states that should be prefixed with bucketname prefix but when i do that i get a validationexception with this message validation errors detected value mybucket tmp nlp input interaction txt at inputdataconfig failed to satisfy constraint member must satisfy regular expression pattern value mybucket tmp nlp output interaction txt at outputdataconfig failed to satisfy constraint member must satisfy regular expression pattern to reproduce observed behavior jobresp err client startentitiesdetectionjob context background comprehend startentitiesdetectionjobinput dataaccessrolearn aws string arn aws iam role comprehenddataaccessrole inputdataconfig types inputdataconfig aws string mybucket tmp nlp output interaction txt inputformat one doc per file languagecode en outputdataconfig types outputdataconfig aws string bucketprefix outputfilekey expected behavior docs should be updated or this error should not be returned screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here | 0 |
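The ValidationException in the record above shows the service enforces a lowercase `s3://` scheme via a regular expression. Applying that exact pattern (copied from the error message) with Python's `re` shows why the documented `S3://` prefix fails:

```python
import re

# Pattern taken verbatim from the ValidationException message.
S3URI_PATTERN = re.compile(r"s3://[a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9](/.*)?")

def is_valid_s3uri(uri: str) -> bool:
    return S3URI_PATTERN.fullmatch(uri) is not None

print(is_valid_s3uri("S3://mybucket/tmp/nlp/input/doc.txt"))  # False: uppercase scheme
print(is_valid_s3uri("s3://mybucket/tmp/nlp/input/doc.txt"))  # True
```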
113,090 | 17,115,727,259 | IssuesEvent | 2021-07-11 10:01:34 | turkdevops/weblate | https://api.github.com/repos/turkdevops/weblate | opened | CVE-2020-11023 (Medium) detected in jquery-1.11.1.min.js, jquery-1.11.2.min.js | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.11.1.min.js</b>, <b>jquery-1.11.2.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.11.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.1/jquery.min.js</a></p>
<p>Path to dependency file: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/rtl/input-groups.html</p>
<p>Path to vulnerable library: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/rtl/input-groups.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/carousel/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js</a></p>
<p>Path to dependency file: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/jumbotron/index.html</p>
<p>Path to vulnerable library: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/jumbotron/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/sticky-footer-navbar/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/navbar/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/carousel/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/dashboard/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/starter-template/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/non-responsive/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/tooltip-viewport/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/offcanvas/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/bootstrap/docs/examples/theme/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/navbar-fixed-top/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/navbar-static-top/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/theme/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/blog/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/cover/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.2.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/weblate/commit/3aa9343e44c1c7fd2f515bd352399f25838354a4">3aa9343e44c1c7fd2f515bd352399f25838354a4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11023 (Medium) detected in jquery-1.11.1.min.js, jquery-1.11.2.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.11.1.min.js</b>, <b>jquery-1.11.2.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.11.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.1/jquery.min.js</a></p>
<p>Path to dependency file: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/rtl/input-groups.html</p>
<p>Path to vulnerable library: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/rtl/input-groups.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/carousel/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.2/jquery.min.js</a></p>
<p>Path to dependency file: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/jumbotron/index.html</p>
<p>Path to vulnerable library: weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/jumbotron/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/sticky-footer-navbar/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/navbar/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/carousel/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/dashboard/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/starter-template/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/non-responsive/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/tooltip-viewport/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/offcanvas/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/bootstrap/docs/examples/theme/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/navbar-fixed-top/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/navbar-static-top/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/theme/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/blog/index.html,weblate/scripts/yarn/node_modules/bootstrap-rtl/examples/originals/examples/cover/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.2.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/weblate/commit/3aa9343e44c1c7fd2f515bd352399f25838354a4">3aa9343e44c1c7fd2f515bd352399f25838354a4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_reli | cve medium detected in jquery min js jquery min js cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file weblate scripts yarn node modules bootstrap rtl examples rtl input groups html path to vulnerable library weblate scripts yarn node modules bootstrap rtl examples rtl input groups html weblate scripts yarn node modules bootstrap rtl examples carousel index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file weblate scripts yarn node modules bootstrap rtl examples originals examples jumbotron index html path to vulnerable library weblate scripts yarn node modules bootstrap rtl examples originals examples jumbotron index html weblate scripts yarn node modules bootstrap rtl examples originals examples sticky footer navbar index html weblate scripts yarn node modules bootstrap rtl examples originals examples navbar index html weblate scripts yarn node modules bootstrap rtl examples originals examples carousel index html weblate scripts yarn node modules bootstrap rtl examples originals examples dashboard index html weblate scripts yarn node modules bootstrap rtl examples originals examples starter template index html weblate scripts yarn node modules bootstrap rtl examples originals examples non responsive index html weblate scripts yarn node modules bootstrap rtl examples originals examples tooltip viewport index html weblate scripts yarn node modules bootstrap rtl examples originals examples offcanvas index html weblate scripts yarn node modules bootstrap rtl bootstrap docs examples theme index html weblate scripts yarn node modules bootstrap rtl examples originals examples navbar fixed top 
index html weblate scripts yarn node modules bootstrap rtl examples originals examples navbar static top index html weblate scripts yarn node modules bootstrap rtl examples originals examples theme index html weblate scripts yarn node modules bootstrap rtl examples originals examples blog index html weblate scripts yarn node modules bootstrap rtl examples originals examples cover index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
2,961 | 30,643,711,914 | IssuesEvent | 2023-07-25 01:36:49 | crossplane/crossplane | https://api.github.com/repos/crossplane/crossplane | opened | Install and Upgrade reliability | reliability user experience roadmap package | ### What problem are you facing?
This is a tracking Epic for the general theme of reliability during package installations and upgrades. The goals from the effort spent on this epic would be to enable Crossplane users to be able to:
* **install** packages successfully (no errors) and cleanly (all resources are healthy/running)
* **uninstall** packages successfully (no errors) and cleanly (all resources are fully removed from the cluster)
* **upgrade** packages successfully (no errors) and cleanly (all resources are transitioned from previous version to new version)
* **rollback** packages successfully (no errors) and cleanly (all resources are transitioned from current version to previous version)
Basically, we want users to be able to perform all package operations with a very high degree of confidence that the intended operation succeeds cleanly and the control plane is left in the expected resultant state of the package operation.
```[tasklist]
### Tasks
- [ ] https://github.com/crossplane/crossplane/issues/3742
- [ ] https://github.com/crossplane/crossplane/issues/3423
- [ ] https://github.com/crossplane/crossplane/issues/3985
- [ ] https://github.com/crossplane/crossplane/issues/3784
- [ ] https://github.com/crossplane/crossplane/issues/4218
- [ ] https://github.com/crossplane/crossplane/issues/4063
- [ ] https://github.com/crossplane/crossplane/issues/3598
``` | True | Install and Upgrade reliability - ### What problem are you facing?
This is a tracking Epic for the general theme of reliability during package installations and upgrades. The goals from the effort spent on this epic would be to enable Crossplane users to be able to:
* **install** packages successfully (no errors) and cleanly (all resources are healthy/running)
* **uninstall** packages successfully (no errors) and cleanly (all resources are fully removed from the cluster)
* **upgrade** packages successfully (no errors) and cleanly (all resources are transitioned from previous version to new version)
* **rollback** packages successfully (no errors) and cleanly (all resources are transitioned from current version to previous version)
Basically, we want users to be able to perform all package operations with a very high degree of confidence that the intended operation succeeds cleanly and the control plane is left in the expected resultant state of the package operation.
```[tasklist]
### Tasks
- [ ] https://github.com/crossplane/crossplane/issues/3742
- [ ] https://github.com/crossplane/crossplane/issues/3423
- [ ] https://github.com/crossplane/crossplane/issues/3985
- [ ] https://github.com/crossplane/crossplane/issues/3784
- [ ] https://github.com/crossplane/crossplane/issues/4218
- [ ] https://github.com/crossplane/crossplane/issues/4063
- [ ] https://github.com/crossplane/crossplane/issues/3598
``` | reli | install and upgrade reliability what problem are you facing this is a tracking epic for the general theme of reliability during package installations and upgrades the goals from the effort spent on this epic would be to enable crossplane users to be able to install packages successfully no errors and cleanly all resources are healthy running uninstall packages successfully no errors and cleanly all resources are fully removed from the cluster upgrade packages successfully no errors and cleanly all resources are transitioned from previous version to new version rollback packages successfully no errors and cleanly all resources are transitioned from current version to previous version basically we want users to be able to perform all package operations with a very high degree of confidence that the intended operation succeeds cleanly and the control plane is left in the expected resultant state of the package operation tasks | 1 |
623 | 9,106,544,507 | IssuesEvent | 2019-02-21 00:19:13 | Microsoft/VFSForGit | https://api.github.com/repos/Microsoft/VFSForGit | opened | Add functional tests that ensure VFS4G loads and processes persisted background tasks properly | macOS reliability windows | We currently don't have functional test coverage for the scenario where persisted (and unprocessed) background tasks are loaded and processed correctly when mounting a repo. | True | Add functional tests that ensure VFS4G loads and processes persisted background tasks properly - We currently don't have functional test coverage for the scenario where persisted (and unprocessed) background tasks are loaded and processed correctly when mounting a repo. | reli | add functional tests that ensure loads and processes persisted background tasks properly we currently don t have functional test coverage for the scenario where persisted and unprocessed background tasks are loaded and processed correctly when mounting a repo | 1 |
30,098 | 14,405,412,846 | IssuesEvent | 2020-12-03 18:40:03 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | [IP Sprint] Research UpdatedCachedAppealsAttributes cache strategy improvements | Eng: Performance Type: Tech-Improvement | ## Description
Research Dive to figure out our best options for caching the data in UpdateCachedAppealAttributes
### AC
- Which fields are prone to updating often, which arent. which external source do these pull from, how does hearings use the fields
- Which fields should stay in the job & the Postgres DB cache, which should we move to redis
- how does Caseflow currently use redis
- For any that we move to the redis cache, should we write a tighter-scoped cache priming job to do the first heavy query?
- once common changed fields are out of the job, how often does it need to run? | True | [IP Sprint] Research UpdatedCachedAppealsAttributes cache strategy improvements - ## Description
Research Dive to figure out our best options for caching the data in UpdateCachedAppealAttributes
### AC
- Which fields are prone to updating often, which arent. which external source do these pull from, how does hearings use the fields
- Which fields should stay in the job & the Postgres DB cache, which should we move to redis
- how does Caseflow currently use redis
- For any that we move to the redis cache, should we write a tighter-scoped cache priming job to do the first heavy query?
- once common changed fields are out of the job, how often does it need to run? | non_reli | research updatedcachedappealsattributes cache strategy improvements description research dive to figure out our best options for caching the data in updatecachedappealattributes ac which fields are prone to updating often which arent which external source do these pull from how does hearings use the fields which fields should stay in the job the postgres db cache which should we move to redis how does caseflow currently use redis for any that we move to the redis cache should we write a tighter scoped cache priming job to do the first heavy query once common changed fields are out of the job how often does it need to run | 0 |
1,336 | 15,056,369,114 | IssuesEvent | 2021-02-03 20:04:44 | FoundationDB/fdb-kubernetes-operator | https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator | closed | Replacing pods when changing node selector | reliability | When the user changes the node selector in the pod spec, it has the potential to force the pod to schedule to a new node. If the cluster is running on local persistent volumes, this will leave the pod in a state where it can't schedule. I think it's safer to replace the pods when changing node selectors, rather than deleting and recreating them with the same volume. | True | Replacing pods when changing node selector - When the user changes the node selector in the pod spec, it has the potential to force the pod to schedule to a new node. If the cluster is running on local persistent volumes, this will leave the pod in a state where it can't schedule. I think it's safer to replace the pods when changing node selectors, rather than deleting and recreating them with the same volume. | reli | replacing pods when changing node selector when the user changes the node selector in the pod spec it has the potential to force the pod to schedule to a new node if the cluster is running on local persistent volumes this will leave the pod in a state where it can t schedule i think it s safer to replace the pods when changing node selectors rather than deleting and recreating them with the same volume | 1 |
2,852 | 28,239,033,590 | IssuesEvent | 2023-04-06 05:02:57 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | closed | [BUG] vector leaked when running NDS 3TB with memory restricted | bug reliability | **Describe the bug**
While trying to repro #7581 by running NDS 3TB with memory restricted to 6GB, and refcount debugging enabled, I saw the following vector leak.
```
Executor task launch worker for task 75.2 in stage 749.0 (TID 36368) 23/04/05 14:35:54:337 WARN RapidsBufferCatalog: device memory store spilling to reduce usage from 353594624 total (117886720 spillable) to 0 bytes
Executor task launch worker for task 75.2 in stage 749.0 (TID 36368) 23/04/05 14:35:54:337 WARN RapidsBufferCatalog: Targeting a host memory size of 34241851648. Current total 3072052224. Current spillable 3072052224
Cleaner Thread 23/04/05 14:35:54:340 ERROR MemoryCleaner: Leaked vector (ID: 861372): 2023-04-05 14:35:52.0426 UTC: INC
java.lang.Thread.getStackTrace(Thread.java:1559)
ai.rapids.cudf.MemoryCleaner$RefCountDebugItem.<init>(MemoryCleaner.java:333)
ai.rapids.cudf.MemoryCleaner$Cleaner.addRef(MemoryCleaner.java:91)
ai.rapids.cudf.ColumnVector.incRefCountInternal(ColumnVector.java:251)
ai.rapids.cudf.ColumnVector.<init>(ColumnVector.java:159)
ai.rapids.cudf.ColumnVector.fromViewWithContiguousAllocation(ColumnVector.java:200)
ai.rapids.cudf.Table.fromPackedTable(Table.java:3550)
ai.rapids.cudf.ContiguousTable.getTable(ContiguousTable.java:76)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$6(RmmRapidsRetryIterator.scala:591)
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
scala.collection.TraversableLike.map(TraversableLike.scala:286)
scala.collection.TraversableLike.map$(TraversableLike.scala:279)
scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$5(RmmRapidsRetryIterator.scala:591)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:55)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:53)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$4(RmmRapidsRetryIterator.scala:590)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$3(RmmRapidsRetryIterator.scala:588)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$2(RmmRapidsRetryIterator.scala:587)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$1(RmmRapidsRetryIterator.scala:582)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$AutoCloseableAttemptSpliterator.split(RmmRapidsRetryIterator.scala:414)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:519)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:460)
scala.collection.Iterator.toStream(Iterator.scala:1417)
scala.collection.Iterator.toStream$(Iterator.scala:1416)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.toStream(RmmRapidsRetryIterator.scala:479)
scala.collection.TraversableOnce.toSeq(TraversableOnce.scala:354)
scala.collection.TraversableOnce.toSeq$(TraversableOnce.scala:354)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.toSeq(RmmRapidsRetryIterator.scala:479)
com.nvidia.spark.rapids.GpuHashAggregateIterator$AggHelper.aggregate(aggregate.scala:289)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.aggregate(aggregate.scala:395)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.$anonfun$computeAggregateAndClose$1(aggregate.scala:424)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.withResource(aggregate.scala:156)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.computeAggregateAndClose(aggregate.scala:415)
com.nvidia.spark.rapids.GpuHashAggregateIterator.aggregateInputBatches(aggregate.scala:603)
com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$next$2(aggregate.scala:555)
scala.Option.getOrElse(Option.scala:189)
com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:552)
com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:497)
org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$$anon$1.partNextBatch(GpuShuffleExchangeExecBase.scala:318)
org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$$anon$1.hasNext(GpuShuffleExchangeExecBase.scala:340)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$2(RapidsShuffleInternalManagerBase.scala:281)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$2$adapted(RapidsShuffleInternalManagerBase.scala:274)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.withResource(RapidsShuffleInternalManagerBase.scala:234)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$1(RapidsShuffleInternalManagerBase.scala:274)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$1$adapted(RapidsShuffleInternalManagerBase.scala:273)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.withResource(RapidsShuffleInternalManagerBase.scala:234)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.write(RapidsShuffleInternalManagerBase.scala:273)
org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
org.apache.spark.scheduler.Task.run(Task.scala:131)
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
```
| True | [BUG] vector leaked when running NDS 3TB with memory restricted - **Describe the bug**
While trying to repro #7581 by running NDS 3TB with memory restricted to 6GB, and refcount debugging enabled, I saw the following vector leak.
```
Executor task launch worker for task 75.2 in stage 749.0 (TID 36368) 23/04/05 14:35:54:337 WARN RapidsBufferCatalog: device memory store spilling to reduce usage from 353594624 total (117886720 spillable) to 0 bytes
Executor task launch worker for task 75.2 in stage 749.0 (TID 36368) 23/04/05 14:35:54:337 WARN RapidsBufferCatalog: Targeting a host memory size of 34241851648. Current total 3072052224. Current spillable 3072052224
Cleaner Thread 23/04/05 14:35:54:340 ERROR MemoryCleaner: Leaked vector (ID: 861372): 2023-04-05 14:35:52.0426 UTC: INC
java.lang.Thread.getStackTrace(Thread.java:1559)
ai.rapids.cudf.MemoryCleaner$RefCountDebugItem.<init>(MemoryCleaner.java:333)
ai.rapids.cudf.MemoryCleaner$Cleaner.addRef(MemoryCleaner.java:91)
ai.rapids.cudf.ColumnVector.incRefCountInternal(ColumnVector.java:251)
ai.rapids.cudf.ColumnVector.<init>(ColumnVector.java:159)
ai.rapids.cudf.ColumnVector.fromViewWithContiguousAllocation(ColumnVector.java:200)
ai.rapids.cudf.Table.fromPackedTable(Table.java:3550)
ai.rapids.cudf.ContiguousTable.getTable(ContiguousTable.java:76)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$6(RmmRapidsRetryIterator.scala:591)
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
scala.collection.TraversableLike.map(TraversableLike.scala:286)
scala.collection.TraversableLike.map$(TraversableLike.scala:279)
scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$5(RmmRapidsRetryIterator.scala:591)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:55)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:53)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$4(RmmRapidsRetryIterator.scala:590)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$3(RmmRapidsRetryIterator.scala:588)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$2(RmmRapidsRetryIterator.scala:587)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withResource(RmmRapidsRetryIterator.scala:28)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$.$anonfun$splitSpillableInHalfByRows$1(RmmRapidsRetryIterator.scala:582)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$AutoCloseableAttemptSpliterator.split(RmmRapidsRetryIterator.scala:414)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:519)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:460)
scala.collection.Iterator.toStream(Iterator.scala:1417)
scala.collection.Iterator.toStream$(Iterator.scala:1416)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.toStream(RmmRapidsRetryIterator.scala:479)
scala.collection.TraversableOnce.toSeq(TraversableOnce.scala:354)
scala.collection.TraversableOnce.toSeq$(TraversableOnce.scala:354)
com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.toSeq(RmmRapidsRetryIterator.scala:479)
com.nvidia.spark.rapids.GpuHashAggregateIterator$AggHelper.aggregate(aggregate.scala:289)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.aggregate(aggregate.scala:395)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.$anonfun$computeAggregateAndClose$1(aggregate.scala:424)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.withResource(aggregate.scala:156)
com.nvidia.spark.rapids.GpuHashAggregateIterator$.computeAggregateAndClose(aggregate.scala:415)
com.nvidia.spark.rapids.GpuHashAggregateIterator.aggregateInputBatches(aggregate.scala:603)
com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$next$2(aggregate.scala:555)
scala.Option.getOrElse(Option.scala:189)
com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:552)
com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:497)
org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$$anon$1.partNextBatch(GpuShuffleExchangeExecBase.scala:318)
org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$$anon$1.hasNext(GpuShuffleExchangeExecBase.scala:340)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$2(RapidsShuffleInternalManagerBase.scala:281)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$2$adapted(RapidsShuffleInternalManagerBase.scala:274)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.withResource(RapidsShuffleInternalManagerBase.scala:234)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$1(RapidsShuffleInternalManagerBase.scala:274)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.$anonfun$write$1$adapted(RapidsShuffleInternalManagerBase.scala:273)
com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.withResource(RapidsShuffleInternalManagerBase.scala:234)
org.apache.spark.sql.rapids.RapidsShuffleThreadedWriterBase.write(RapidsShuffleInternalManagerBase.scala:273)
org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
org.apache.spark.scheduler.Task.run(Task.scala:131)
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
```
| reli | vector leaked when running nds with memory restricted describe the bug while trying to repro by running nds with memory restricted to and refcount debugging enabled i saw the following vector leak executor task launch worker for task in stage tid warn rapidsbuffercatalog device memory store spilling to reduce usage from total spillable to bytes executor task launch worker for task in stage tid warn rapidsbuffercatalog targeting a host memory size of current total current spillable cleaner thread error memorycleaner leaked vector id utc inc java lang thread getstacktrace thread java ai rapids cudf memorycleaner refcountdebugitem memorycleaner java ai rapids cudf memorycleaner cleaner addref memorycleaner java ai rapids cudf columnvector increfcountinternal columnvector java ai rapids cudf columnvector columnvector java ai rapids cudf columnvector fromviewwithcontiguousallocation columnvector java ai rapids cudf table frompackedtable table java ai rapids cudf contiguoustable gettable contiguoustable java com nvidia spark rapids rmmrapidsretryiterator anonfun splitspillableinhalfbyrows rmmrapidsretryiterator scala scala collection traversablelike anonfun map traversablelike scala scala collection indexedseqoptimized foreach indexedseqoptimized scala scala collection indexedseqoptimized foreach indexedseqoptimized scala scala collection mutable arrayops ofref foreach arrayops scala scala collection traversablelike map traversablelike scala scala collection traversablelike map traversablelike scala scala collection mutable arrayops ofref map arrayops scala com nvidia spark rapids rmmrapidsretryiterator anonfun splitspillableinhalfbyrows rmmrapidsretryiterator scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids rmmrapidsretryiterator withresource rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator anonfun splitspillableinhalfbyrows rmmrapidsretryiterator scala 
com nvidia spark rapids arm withresource arm scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids rmmrapidsretryiterator withresource rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator anonfun splitspillableinhalfbyrows rmmrapidsretryiterator scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids rmmrapidsretryiterator withresource rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator anonfun splitspillableinhalfbyrows rmmrapidsretryiterator scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids rmmrapidsretryiterator withresource rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator anonfun splitspillableinhalfbyrows rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator autocloseableattemptspliterator split rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryiterator next rmmrapidsretryiterator scala com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryautocloseableiterator next rmmrapidsretryiterator scala scala collection iterator tostream iterator scala scala collection iterator tostream iterator scala com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryiterator tostream rmmrapidsretryiterator scala scala collection traversableonce toseq traversableonce scala scala collection traversableonce toseq traversableonce scala com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryiterator toseq rmmrapidsretryiterator scala com nvidia spark rapids gpuhashaggregateiterator agghelper aggregate aggregate scala com nvidia spark rapids gpuhashaggregateiterator aggregate aggregate scala com nvidia spark rapids gpuhashaggregateiterator anonfun computeaggregateandclose aggregate scala com nvidia spark rapids arm withresource arm scala com nvidia 
spark rapids arm withresource arm scala com nvidia spark rapids gpuhashaggregateiterator withresource aggregate scala com nvidia spark rapids gpuhashaggregateiterator computeaggregateandclose aggregate scala com nvidia spark rapids gpuhashaggregateiterator aggregateinputbatches aggregate scala com nvidia spark rapids gpuhashaggregateiterator anonfun next aggregate scala scala option getorelse option scala com nvidia spark rapids gpuhashaggregateiterator next aggregate scala com nvidia spark rapids gpuhashaggregateiterator next aggregate scala org apache spark sql rapids execution gpushuffleexchangeexecbase anon partnextbatch gpushuffleexchangeexecbase scala org apache spark sql rapids execution gpushuffleexchangeexecbase anon hasnext gpushuffleexchangeexecbase scala org apache spark sql rapids rapidsshufflethreadedwriterbase anonfun write rapidsshuffleinternalmanagerbase scala org apache spark sql rapids rapidsshufflethreadedwriterbase anonfun write adapted rapidsshuffleinternalmanagerbase scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids arm withresource arm scala org apache spark sql rapids rapidsshufflethreadedwriterbase withresource rapidsshuffleinternalmanagerbase scala org apache spark sql rapids rapidsshufflethreadedwriterbase anonfun write rapidsshuffleinternalmanagerbase scala org apache spark sql rapids rapidsshufflethreadedwriterbase anonfun write adapted rapidsshuffleinternalmanagerbase scala com nvidia spark rapids arm withresource arm scala com nvidia spark rapids arm withresource arm scala org apache spark sql rapids rapidsshufflethreadedwriterbase withresource rapidsshuffleinternalmanagerbase scala org apache spark sql rapids rapidsshufflethreadedwriterbase write rapidsshuffleinternalmanagerbase scala org apache spark shuffle shufflewriteprocessor write shufflewriteprocessor scala org apache spark scheduler shufflemaptask runtask shufflemaptask scala org apache spark scheduler shufflemaptask runtask shufflemaptask 
scala org apache spark scheduler task run task scala org apache spark executor executor taskrunner anonfun run executor scala org apache spark util utils trywithsafefinally utils scala org apache spark executor executor taskrunner run executor scala java util concurrent threadpoolexecutor runworker threadpoolexecutor java java util concurrent threadpoolexecutor worker run threadpoolexecutor java java lang thread run thread java | 1 |
274,708 | 23,859,360,102 | IssuesEvent | 2022-09-07 05:04:51 | godotengine/godot | https://api.github.com/repos/godotengine/godot | opened | Very random crashes when executing `SubViewport.set_size_2d_override_stretch` | bug topic:rendering needs testing crash | ### Godot version
4.0.alpha.custom_build. 4b164b8e4
### System information
Ubuntu 22.04 - Nvidia GTX 970, Gnome shell 42 X11
### Issue description
When executing random SubViewport function, then after a while(usually after 30min of project running), I have this crash
```
drivers/vulkan/rendering_device_vulkan.cpp:9014:68: runtime error: index 8 out of bounds for type 'VkSampleCountFlagBits [7]'
=================================================================
==15042==ERROR: AddressSanitizer: global-buffer-overflow on address 0x55605079d800 at pc 0x55603a80e2e2 bp 0x7ffd22933f40 sp 0x7ffd22933f30
READ of size 4 at 0x55605079d800 thread T0
#0 0x55603a80e2e1 in RenderingDeviceVulkan::_ensure_supported_sample_count(RenderingDevice::TextureSamples) const drivers/vulkan/rendering_device_vulkan.cpp:9014
#1 0x55603a70ddf7 in RenderingDeviceVulkan::texture_create(RenderingDevice::TextureFormat const&, RenderingDevice::TextureView const&, Vector<Vector<unsigned char> > const&) drivers/vulkan/rendering_device_vulkan.cpp:1736
#2 0x556048cf51ea in RendererRD::TextureStorage::_update_render_target(RendererRD::TextureStorage::RenderTarget*) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2203
#3 0x556048cfbf37 in RendererRD::TextureStorage::render_target_set_size(RID, int, int, unsigned int) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2329
#4 0x55604a4e467f in RendererViewport::viewport_set_size(RID, int, int) servers/rendering/renderer_viewport.cpp:840
#5 0x556047e52272 in RenderingServerDefault::viewport_set_size(RID, int, int) servers/rendering/rendering_server_default.h:583
#6 0x5560419a3c13 in Viewport::_set_size(Vector2i const&, Vector2i const&, Rect2i const&, Transform2D const&, bool) scene/main/viewport.cpp:799
#7 0x556041a5ac96 in SubViewport::set_size_2d_override_stretch(bool) scene/main/viewport.cpp:4072
#8 0x556033940369 in void call_with_variant_args_helper<__UnexistingClass, bool, 0ul>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, Callable::CallError&, IndexSequence<0ul>) core/variant/binder_common.h:262
#9 0x556033939049 in void call_with_variant_args_dv<__UnexistingClass, bool>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, int, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:409
#10 0x556033932620 in MethodBindT<bool>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:320
#11 0x55604c4be894 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:733
#12 0x55604c4bd0cb in Object::callv(StringName const&, Array const&) core/object/object.cpp:670
#13 0x55604c53b02c in void call_with_variant_args_ret_helper<__UnexistingClass, Variant, StringName const&, Array const&, 0ul, 1ul>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, Variant&, Callable::CallError&, IndexSequence<0ul, 1ul>) core/variant/binder_common.h:680
#14 0x55604c534415 in void call_with_variant_args_ret_dv<__UnexistingClass, Variant, StringName const&, Array const&>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, int, Variant&, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:493
#15 0x55604c52c80e in MethodBindTR<Variant, StringName const&, Array const&>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:481
#16 0x5560357d6917 in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1644
#17 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#18 0x55604c4be497 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:711
#19 0x55604ba76de7 in Variant::callp(StringName const&, Variant const**, int, Variant&, Callable::CallError&) core/variant/variant_call.cpp:1048
#20 0x5560357d444c in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1555
#21 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#22 0x5560417eb4c9 in bool Node::_gdvirtual__process_call<false>(double) scene/main/node.h:237
#23 0x556041750f92 in Node::_notification(int) scene/main/node.cpp:56
#24 0x556033e00319 in Node::_notificationv(int, bool) scene/main/node.h:45
#25 0x55604c4bfd71 in Object::notification(int, bool) core/object/object.cpp:790
#26 0x5560418aba3a in SceneTree::_notify_group_pause(StringName const&, int) scene/main/scene_tree.cpp:917
#27 0x55604189c717 in SceneTree::process(double) scene/main/scene_tree.cpp:465
#28 0x5560336e7c39 in Main::iteration() main/main.cpp:2992
#29 0x55603352caf3 in OS_LinuxBSD::run() platform/linuxbsd/os_linuxbsd.cpp:538
#30 0x556033513892 in main platform/linuxbsd/godot_linuxbsd.cpp:72
#31 0x7fec82c06082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
#32 0x55603351332d in _start (/home/runner/work/Qarminer/Qarminer/godot.linuxbsd.tools.x86_64.san+0x36e3632d)
0x55605079d800 is located 4 bytes to the right of global variable 'rasterization_sample_count' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1233:29' (0x55605079d7e0) of size 28
0x55605079d800 is located 32 bytes to the left of global variable 'logic_operations' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1243:17' (0x55605079d820) of size 64
```
This may be regression(probably happens max ~1 month)
https://github.com/godotengine/godot/blob/02d510bd079b0730f14680f75a1325ce1da0ac09/drivers/vulkan/rendering_device_vulkan.cpp#L9014
### Steps to reproduce
Not easily reproducible
### Minimal reproduction project
_No response_ | 1.0 | Very random crashes when executing `SubViewport.set_size_2d_override_stretch` - ### Godot version
4.0.alpha.custom_build. 4b164b8e4
### System information
Ubuntu 22.04 - Nvidia GTX 970, Gnome shell 42 X11
### Issue description
When executing random SubViewport function, then after a while(usually after 30min of project running), I have this crash
```
drivers/vulkan/rendering_device_vulkan.cpp:9014:68: runtime error: index 8 out of bounds for type 'VkSampleCountFlagBits [7]'
=================================================================
==15042==ERROR: AddressSanitizer: global-buffer-overflow on address 0x55605079d800 at pc 0x55603a80e2e2 bp 0x7ffd22933f40 sp 0x7ffd22933f30
READ of size 4 at 0x55605079d800 thread T0
#0 0x55603a80e2e1 in RenderingDeviceVulkan::_ensure_supported_sample_count(RenderingDevice::TextureSamples) const drivers/vulkan/rendering_device_vulkan.cpp:9014
#1 0x55603a70ddf7 in RenderingDeviceVulkan::texture_create(RenderingDevice::TextureFormat const&, RenderingDevice::TextureView const&, Vector<Vector<unsigned char> > const&) drivers/vulkan/rendering_device_vulkan.cpp:1736
#2 0x556048cf51ea in RendererRD::TextureStorage::_update_render_target(RendererRD::TextureStorage::RenderTarget*) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2203
#3 0x556048cfbf37 in RendererRD::TextureStorage::render_target_set_size(RID, int, int, unsigned int) servers/rendering/renderer_rd/storage_rd/texture_storage.cpp:2329
#4 0x55604a4e467f in RendererViewport::viewport_set_size(RID, int, int) servers/rendering/renderer_viewport.cpp:840
#5 0x556047e52272 in RenderingServerDefault::viewport_set_size(RID, int, int) servers/rendering/rendering_server_default.h:583
#6 0x5560419a3c13 in Viewport::_set_size(Vector2i const&, Vector2i const&, Rect2i const&, Transform2D const&, bool) scene/main/viewport.cpp:799
#7 0x556041a5ac96 in SubViewport::set_size_2d_override_stretch(bool) scene/main/viewport.cpp:4072
#8 0x556033940369 in void call_with_variant_args_helper<__UnexistingClass, bool, 0ul>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, Callable::CallError&, IndexSequence<0ul>) core/variant/binder_common.h:262
#9 0x556033939049 in void call_with_variant_args_dv<__UnexistingClass, bool>(__UnexistingClass*, void (__UnexistingClass::*)(bool), Variant const**, int, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:409
#10 0x556033932620 in MethodBindT<bool>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:320
#11 0x55604c4be894 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:733
#12 0x55604c4bd0cb in Object::callv(StringName const&, Array const&) core/object/object.cpp:670
#13 0x55604c53b02c in void call_with_variant_args_ret_helper<__UnexistingClass, Variant, StringName const&, Array const&, 0ul, 1ul>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, Variant&, Callable::CallError&, IndexSequence<0ul, 1ul>) core/variant/binder_common.h:680
#14 0x55604c534415 in void call_with_variant_args_ret_dv<__UnexistingClass, Variant, StringName const&, Array const&>(__UnexistingClass*, Variant (__UnexistingClass::*)(StringName const&, Array const&), Variant const**, int, Variant&, Callable::CallError&, Vector<Variant> const&) core/variant/binder_common.h:493
#15 0x55604c52c80e in MethodBindTR<Variant, StringName const&, Array const&>::call(Object*, Variant const**, int, Callable::CallError&) core/object/method_bind.h:481
#16 0x5560357d6917 in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1644
#17 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#18 0x55604c4be497 in Object::callp(StringName const&, Variant const**, int, Callable::CallError&) core/object/object.cpp:711
#19 0x55604ba76de7 in Variant::callp(StringName const&, Variant const**, int, Variant&, Callable::CallError&) core/variant/variant_call.cpp:1048
#20 0x5560357d444c in GDScriptFunction::call(GDScriptInstance*, Variant const**, int, Callable::CallError&, GDScriptFunction::CallState*) modules/gdscript/gdscript_vm.cpp:1555
#21 0x55603520bc71 in GDScriptInstance::callp(StringName const&, Variant const**, int, Callable::CallError&) modules/gdscript/gdscript.cpp:1627
#22 0x5560417eb4c9 in bool Node::_gdvirtual__process_call<false>(double) scene/main/node.h:237
#23 0x556041750f92 in Node::_notification(int) scene/main/node.cpp:56
#24 0x556033e00319 in Node::_notificationv(int, bool) scene/main/node.h:45
#25 0x55604c4bfd71 in Object::notification(int, bool) core/object/object.cpp:790
#26 0x5560418aba3a in SceneTree::_notify_group_pause(StringName const&, int) scene/main/scene_tree.cpp:917
#27 0x55604189c717 in SceneTree::process(double) scene/main/scene_tree.cpp:465
#28 0x5560336e7c39 in Main::iteration() main/main.cpp:2992
#29 0x55603352caf3 in OS_LinuxBSD::run() platform/linuxbsd/os_linuxbsd.cpp:538
#30 0x556033513892 in main platform/linuxbsd/godot_linuxbsd.cpp:72
#31 0x7fec82c06082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
#32 0x55603351332d in _start (/home/runner/work/Qarminer/Qarminer/godot.linuxbsd.tools.x86_64.san+0x36e3632d)
0x55605079d800 is located 4 bytes to the right of global variable 'rasterization_sample_count' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1233:29' (0x55605079d7e0) of size 28
0x55605079d800 is located 32 bytes to the left of global variable 'logic_operations' defined in 'drivers/vulkan/rendering_device_vulkan.cpp:1243:17' (0x55605079d820) of size 64
```
This may be regression(probably happens max ~1 month)
https://github.com/godotengine/godot/blob/02d510bd079b0730f14680f75a1325ce1da0ac09/drivers/vulkan/rendering_device_vulkan.cpp#L9014
### Steps to reproduce
Not easily reproducible
### Minimal reproduction project
_No response_ | non_reli | very random crashes when executing subviewport set size override stretch godot version alpha custom build system information ubuntu nvidia gtx gnome shell issue description when executing random subviewport function then after a while usually after of project running i have this crash drivers vulkan rendering device vulkan cpp runtime error index out of bounds for type vksamplecountflagbits error addresssanitizer global buffer overflow on address at pc bp sp read of size at thread in renderingdevicevulkan ensure supported sample count renderingdevice texturesamples const drivers vulkan rendering device vulkan cpp in renderingdevicevulkan texture create renderingdevice textureformat const renderingdevice textureview const vector const drivers vulkan rendering device vulkan cpp in rendererrd texturestorage update render target rendererrd texturestorage rendertarget servers rendering renderer rd storage rd texture storage cpp in rendererrd texturestorage render target set size rid int int unsigned int servers rendering renderer rd storage rd texture storage cpp in rendererviewport viewport set size rid int int servers rendering renderer viewport cpp in renderingserverdefault viewport set size rid int int servers rendering rendering server default h in viewport set size const const const const bool scene main viewport cpp in subviewport set size override stretch bool scene main viewport cpp in void call with variant args helper unexistingclass void unexistingclass bool variant const callable callerror indexsequence core variant binder common h in void call with variant args dv unexistingclass void unexistingclass bool variant const int callable callerror vector const core variant binder common h in methodbindt call object variant const int callable callerror core object method bind h in object callp stringname const variant const int callable callerror core object object cpp in object callv stringname const array const core object object cpp 
in void call with variant args ret helper unexistingclass variant unexistingclass stringname const array const variant const variant callable callerror indexsequence core variant binder common h in void call with variant args ret dv unexistingclass variant unexistingclass stringname const array const variant const int variant callable callerror vector const core variant binder common h in methodbindtr call object variant const int callable callerror core object method bind h in gdscriptfunction call gdscriptinstance variant const int callable callerror gdscriptfunction callstate modules gdscript gdscript vm cpp in gdscriptinstance callp stringname const variant const int callable callerror modules gdscript gdscript cpp in object callp stringname const variant const int callable callerror core object object cpp in variant callp stringname const variant const int variant callable callerror core variant variant call cpp in gdscriptfunction call gdscriptinstance variant const int callable callerror gdscriptfunction callstate modules gdscript gdscript vm cpp in gdscriptinstance callp stringname const variant const int callable callerror modules gdscript gdscript cpp in bool node gdvirtual process call double scene main node h in node notification int scene main node cpp in node notificationv int bool scene main node h in object notification int bool core object object cpp in scenetree notify group pause stringname const int scene main scene tree cpp in scenetree process double scene main scene tree cpp in main iteration main main cpp in os linuxbsd run platform linuxbsd os linuxbsd cpp in main platform linuxbsd godot linuxbsd cpp in libc start main lib linux gnu libc so in start home runner work qarminer qarminer godot linuxbsd tools san is located bytes to the right of global variable rasterization sample count defined in drivers vulkan rendering device vulkan cpp of size is located bytes to the left of global variable logic operations defined in drivers vulkan 
rendering device vulkan cpp of size this may be regression probably happens max month steps to reproduce not easily reproducible minimal reproduction project no response | 0 |
255,958 | 8,126,602,019 | IssuesEvent | 2018-08-17 03:17:34 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | add support for pvtk files that group legacy vtk. | Bug Likelihood: 3 - Occasional Priority: Normal Severity: 3 - Major Irritation | add support for pvtk files with pieces that are legacy vtk files.
example pvtk file:
<pre>
<File version="pvtk-1.0"
dataType="vtkUnstructuredGrid"
numberOfPieces=" 2 ">
<Piece fileName="p1.vtk" />
<Piece fileName="p2.vtk" />
</File>
</pre>
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2815
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: add support for pvtk files that group legacy vtk.
Assigned to: Kathleen Biagas
Category:
Target version: 2.13.0
Author: Cyrus Harrison
Start: 05/05/2017
Due date:
% Done: 100
Estimated time: 4.0
Created: 05/05/2017 12:46 pm
Updated: 09/21/2017 06:13 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.12.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
add support for pvtk files with pieces that are legacy vtk files.
example pvtk file:
<pre>
<File version="pvtk-1.0"
dataType="vtkUnstructuredGrid"
numberOfPieces=" 2 ">
<Piece fileName="p1.vtk" />
<Piece fileName="p2.vtk" />
</File>
</pre>
Comments:
this case can be done easily with a .visit file as well, but we should have out of the box support for pvtk.
I believe the vtkPDataSetReader class in VTK's IO/Parallel can be utilized for this, modified possibly for the way VisIt handles files.
Created a Parser for pvtk files, and added it to VTK reader.M databases/VTK/VTKPluginInfo.CA databases/VTK/PVTKParser.CM databases/VTK/avtVTKFileReader.CA databases/VTK/PVTKParser.hM databases/VTK/VTK.xmlM databases/VTK/CMakeLists.txt
| 1.0 | add support for pvtk files that group legacy vtk. - add support for pvtk files with pieces that are legacy vtk files.
example pvtk file:
<pre>
<File version="pvtk-1.0"
dataType="vtkUnstructuredGrid"
numberOfPieces=" 2 ">
<Piece fileName="p1.vtk" />
<Piece fileName="p2.vtk" />
</File>
</pre>
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2815
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: add support for pvtk files that group legacy vtk.
Assigned to: Kathleen Biagas
Category:
Target version: 2.13.0
Author: Cyrus Harrison
Start: 05/05/2017
Due date:
% Done: 100
Estimated time: 4.0
Created: 05/05/2017 12:46 pm
Updated: 09/21/2017 06:13 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.12.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
add support for pvtk files with pieces that are legacy vtk files.
example pvtk file:
<pre>
<File version="pvtk-1.0"
dataType="vtkUnstructuredGrid"
numberOfPieces=" 2 ">
<Piece fileName="p1.vtk" />
<Piece fileName="p2.vtk" />
</File>
</pre>
Comments:
this case can be done easily with a .visit file as well, but we should have out of the box support for pvtk.
I believe the vtkPDataSetReader class in VTK's IO/Parallel can be utilized for this, modified possibly for the way VisIt handles files.
Created a Parser for pvtk files, and added it to VTK reader.M databases/VTK/VTKPluginInfo.CA databases/VTK/PVTKParser.CM databases/VTK/avtVTKFileReader.CA databases/VTK/PVTKParser.hM databases/VTK/VTK.xmlM databases/VTK/CMakeLists.txt
| non_reli | add support for pvtk files that group legacy vtk add support for pvtk files with pieces that are legacy vtk files example pvtk file file version pvtk datatype vtkunstructuredgrid numberofpieces redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject add support for pvtk files that group legacy vtk assigned to kathleen biagas category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood occasional severity major irritation found in version impact expected use os all support group any description add support for pvtk files with pieces that are legacy vtk files example pvtk file file version pvtk datatype vtkunstructuredgrid numberofpieces comments this case can be done easily with a visit file as well but we should have out of the box support for pvtk i believe the vtkpdatasetreader class in vtk s io parallel can be utilized for this modified possibly for the way visit handles files created a parser for pvtk files and added it to vtk reader m databases vtk vtkplugininfo ca databases vtk pvtkparser cm databases vtk avtvtkfilereader ca databases vtk pvtkparser hm databases vtk vtk xmlm databases vtk cmakelists txt | 0 |
2,004 | 22,409,719,441 | IssuesEvent | 2022-06-18 14:35:33 | adoptium/infrastructure | https://api.github.com/repos/adoptium/infrastructure | closed | ccache package fails to install on RHEL7 | ansible reliability awxDeployFailure | It should probably be left to install from source on this OS and no attempt made to install via package manager.
```
TASK [Common : Install additional build tools for RHEL 7] **********************
ok: [build-marist-rhel77-s390x-2] => (item=libstdc++-static)
ok: [build-marist-rhel77-s390x-1] => (item=libstdc++-static)
failed: [build-marist-rhel77-s390x-2] (item=ccache) => {"ansible_loop_var": "item", "changed": false, "item": "ccache", "msg": "No package matching 'ccache' found available, installed or updated", "rc": 126, "results": ["No package matching 'ccache' found available, installed or updated"]}
failed: [build-marist-rhel77-s390x-1] (item=ccache) => {"ansible_loop_var": "item", "changed": false, "item": "ccache", "msg": "No package matching 'ccache' found available, installed or updated", "rc": 126, "results": ["No package matching 'ccache' found available, installed or updated"]}
```
| True | ccache package fails to install on RHEL7 - It should probably be left to install from source on this OS and no attempt made to install via package manager.
```
TASK [Common : Install additional build tools for RHEL 7] **********************
ok: [build-marist-rhel77-s390x-2] => (item=libstdc++-static)
ok: [build-marist-rhel77-s390x-1] => (item=libstdc++-static)
failed: [build-marist-rhel77-s390x-2] (item=ccache) => {"ansible_loop_var": "item", "changed": false, "item": "ccache", "msg": "No package matching 'ccache' found available, installed or updated", "rc": 126, "results": ["No package matching 'ccache' found available, installed or updated"]}
failed: [build-marist-rhel77-s390x-1] (item=ccache) => {"ansible_loop_var": "item", "changed": false, "item": "ccache", "msg": "No package matching 'ccache' found available, installed or updated", "rc": 126, "results": ["No package matching 'ccache' found available, installed or updated"]}
```
| reli | ccache package fails to install on it should probably be left to install from source on this os and no attempt made to install via package manager task ok item libstdc static ok item libstdc static failed item ccache ansible loop var item changed false item ccache msg no package matching ccache found available installed or updated rc results failed item ccache ansible loop var item changed false item ccache msg no package matching ccache found available installed or updated rc results | 1 |
181,059 | 21,640,521,692 | IssuesEvent | 2022-05-05 18:17:01 | project-chip/connectedhomeip | https://api.github.com/repos/project-chip/connectedhomeip | closed | BLEEndPoint::Receive should probably check that it has data | p1 V1.0 security | #### Problem
`BLEEndPoint::Receive` does:
```
if (mBtpEngine.IsCommandPacket(data))
```
which will examine the first byte of `data`. But what if `data` has no payload bytes at all? Seems like we'd be reading random memory.
#### Proposed Solution
Check that we have a byte to read before reading it.
It looks like this code came in like this from Weave.... @pan-apple @turon
| True | BLEEndPoint::Receive should probably check that it has data - #### Problem
`BLEEndPoint::Receive` does:
```
if (mBtpEngine.IsCommandPacket(data))
```
which will examine the first byte of `data`. But what if `data` has no payload bytes at all? Seems like we'd be reading random memory.
#### Proposed Solution
Check that we have a byte to read before reading it.
It looks like this code came in like this from Weave.... @pan-apple @turon
| non_reli | bleendpoint receive should probably check that it has data problem bleendpoint receive does if mbtpengine iscommandpacket data which will examine the first byte of data but what if data has no payload bytes at all seems like we d be reading random memory proposed solution check that we have a byte to read before reading it it looks like this code came in like this from weave pan apple turon | 0 |
405,264 | 27,510,297,138 | IssuesEvent | 2023-03-06 08:16:29 | Kawbat/dd2480-jabref | https://api.github.com/repos/Kawbat/dd2480-jabref | closed | Write project description | documentation | Add a short project description together with a link to the original repository. | 1.0 | Write project description - Add a short project description together with a link to the original repository. | non_reli | write project description add a short project description together with a link to the original repository | 0 |
65,453 | 27,106,498,194 | IssuesEvent | 2023-02-15 12:30:42 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | reopened | Inheritance and override icons like in CLion | Language Service Feature Request more votes needed | Type: Feature Request
It would be nice to have icons on lines which show the following, like in CLion:
- the method overrides another one (a hover would indicate the overridden method)
- the method is overridden somewhere (a hover would indicate all overriding methods)
- the class/struct inherits (an)other(s) (a hover would indicate all parents)
- the class/struct is inherited somewhere (a hover would indicate all childs)
This is basically the same feature request as [this one](https://github.com/microsoft/python-language-server/issues/1641) but for C++.
An example on how it looks in CLion (not complete):

| 1.0 | Inheritance and override icons like in CLion - Type: Feature Request
It would be nice to have icons on lines which show the following, like in CLion:
- the method overrides another one (a hover would indicate the overridden method)
- the method is overridden somewhere (a hover would indicate all overriding methods)
- the class/struct inherits (an)other(s) (a hover would indicate all parents)
- the class/struct is inherited somewhere (a hover would indicate all childs)
This is basically the same feature request as [this one](https://github.com/microsoft/python-language-server/issues/1641) but for C++.
An example on how it looks in CLion (not complete):

| non_reli | inheritance and override icons like in clion type feature request it would be nice to have icons on lines which show the following like in clion the method overrides another one a hover would indicate the overridden method the method is overridden somewhere a hover would indicate all overriding methods the class struct inherits an other s a hover would indicate all parents the class struct is inherited somewhere a hover would indicate all childs this is basically the same feature request as but for c an example on how it looks in clion not complete | 0 |
3,001 | 30,877,805,297 | IssuesEvent | 2023-08-03 15:21:39 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | opened | IllegalStateArgument when removing job stream | kind/bug severity/high area/reliability | **Describe the bug**
An exception is thrown whenever the `JobStreamRemover` tries to remove a stream in the gateway. This is due to the future completing without an executor, thus completing within the actor context, and then calling `Actor.call`. As we already wanted to ensure that an executor was used, we should do that as well.
**To Reproduce**
Register a stream via the command. Cancel it. An exception is thrown and the stream is not removed from the gateway nor the broker (even if the client has gone away).
**Expected behavior**
The stream is removed and no error is thrown.
**Environment:**
- Zeebe Version: 8.3.0-alpha4
| True | reli | 1
313,107 | 9,557,105,505 | IssuesEvent | 2019-05-03 10:25:50 | Abwasserrohr/SKYBLOCK.SK | https://api.github.com/repos/Abwasserrohr/SKYBLOCK.SK | opened | storage.sk: Make it possible to place signs on the side of the storage unit and update it | enhancement priority:low | Update storage information signs also on the side of the storage unit, not only if it is on top. | 1.0 | storage.sk: Make it possible to place signs on the side of the storage unit and update it - Update storage information signs also on the side of the storage unit, not only if it is on top. | non_reli | storage sk make it possible to place signs on the side of the storage unit and update it update storage information signs also on the side of the storage unit not only if it is on top | 0 |
70,803 | 15,110,460,915 | IssuesEvent | 2021-02-08 19:15:22 | idonthaveafifaaddiction/openthread | https://api.github.com/repos/idonthaveafifaaddiction/openthread | opened | CVE-2018-19608 (Medium) detected in https://source.codeaurora.org/quic/lc/external/github.com/openthread/openthread/upstream/thread-reference-20191113 | security vulnerability | ## CVE-2018-19608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>https://source.codeaurora.org/quic/lc/external/github.com/openthread/openthread/upstream/thread-reference-20191113</b></p></summary>
<p>
<p>Library home page: <a href=https://source.codeaurora.org/quic/lc/external/github.com/openthread/openthread/>https://source.codeaurora.org/quic/lc/external/github.com/openthread/openthread/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/idonthaveafifaaddiction/openthread/commit/2dd677e6bec4cc5e005bf3b02de7821ba23885af">2dd677e6bec4cc5e005bf3b02de7821ba23885af</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openthread/third_party/mbedtls/repo/library/bignum.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arm Mbed TLS before 2.14.1, before 2.7.8, and before 2.1.17 allows a local unprivileged attacker to recover the plaintext of RSA decryption, which is used in RSA-without-(EC)DH(E) cipher suites.
<p>Publish Date: 2018-12-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19608>CVE-2018-19608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19608</a></p>
<p>Fix Resolution: mbedtls-2.14.1</p>
</p>
</details>
<p></p>
| True | non_reli | 0
226,049 | 17,937,045,912 | IssuesEvent | 2021-09-10 16:40:01 | Stewart86/aioCloudflare | https://api.github.com/repos/Stewart86/aioCloudflare | opened | Add test mock support | testing | At some point user might not want to connect directly to Cloudflare API when doing unit test. A mocking support would definitely help.
Can use `respx` to construct the `pytest.fixture` | 1.0 | non_reli | 0
588,564 | 17,662,520,264 | IssuesEvent | 2021-08-21 20:11:19 | braem/moodi | https://api.github.com/repos/braem/moodi | closed | [Bug]: Font on tabbar not correct | Type: Bug Priority: Medium Size: Small | ### Describe the bug
Font family on tabbar doesn't match the app's default (comfortaa)
### To Reproduce
1. Open app
2. Navigate to any page with a tabbar (everywhere but new mood entry currently)
3. Notice the font doesnt match! aaaaaaaaaa
### Expected Behavior
Use Comfortaa!
### Additional context
This is a little more involved with app shell stuff ;( | 1.0 | non_reli | 0
255 | 5,735,786,641 | IssuesEvent | 2017-04-22 01:34:23 | Storj/bridge | https://api.github.com/repos/Storj/bridge | closed | Integrate and deploy auditing worker | reliability | While the farmer uptime monitor will catch many farmers that go offline, there also needs to be a process that will make sure that files still exist, if the files have not been retrieved successfully recently. These audits may not need to be an a predefined schedule, but would be triggered when a file has not been accessed within a duration of time.
There is existing work in this area at:
- https://github.com/Storj/bridge/pull/326
- https://github.com/Storj/service-auditor
This will also contribute to solving issues such as, as times that the file is stored can be recorded and used for payouts:
- https://github.com/Storj/bridge/issues/389 | True | reli | 1
43,487 | 7,047,923,058 | IssuesEvent | 2018-01-02 15:37:17 | tapestry-cloud/tapestry | https://api.github.com/repos/tapestry-cloud/tapestry | closed | Potentially add a basic serve command | add-documentation enhancement | Basically a wrapper around `php -S 127.0.0.1:3000 -t build_local`. | 1.0 | non_reli | 0
216,318 | 16,655,974,831 | IssuesEvent | 2021-06-05 14:31:54 | battjt/j1939-84 | https://api.github.com/repos/battjt/j1939-84 | opened | Exclusion List Escape Observed | NOxBinGHG User documentation future enhancement | Exclusion List Escape Observed.
Was SP 12691 expected in exclusion list? Or was 12691 added to DM24 in error, as there was to be one representative for all GHG?
DM24 from Engine #2 (1): [
D F T F
a r e F
t F s l
a r t n SPN — SP Name
----------------------
D F 2 SPN 27 - Engine EGR 1 Valve Position
...
D 4 SPN 12675 - NOx Tracking Engine Activity Lifetime Fuel Consumption Bin 1 (Total)
D 1 SPN 12691 - GHG Tracking Lifetime Active Technology Index
D 4 SPN 12730 - GHG Tracking Lifetime Engine Run Time
SPN 12675 is supported by Engine #2 (1) but will be omitted
SPN 12730 is supported by Engine #2 (1) but will be omitted
13:35:08.3803 DS Request for PGN 64257 to Engine #2 (1) for SPNs 12691
13:35:08.3818 18EA01F9 [3] 01 FB 00 (TX)
13:35:08.4743 18FB0101 [28] FA 00 00 00 00 00 00 00 00 F9 00 00 00 00 00 00 00 00 F8 00 00 00 00 00 00 00 00 00
Green House Gas Lifetime Active Technology Tracking from Engine #2 (1):
SPN 12691, GHG Tracking Lifetime Active Technology Index: 11111010
SPN 12692, GHG Tracking Lifetime Active Technology Time: 0.000000 s
SPN 12693, GHG Tracking Lifetime Active Technology Vehicle Distance: 0.000000 m
Add 12691 and similar SPs to exclusion list. SP 12730 should trigger this query in the planned discrete tests. | 1.0 | non_reli | 0
137 | 4,163,864,769 | IssuesEvent | 2016-06-18 11:35:18 | Bubbus/ACF-Missiles | https://api.github.com/repos/Bubbus/ACF-Missiles | closed | Missile guidance code | enhancement reliability | It is known that missiles tend to miss fast-moving targets a lot, even when they are perfectly able to hit them both propellant and speed wise. This makes anti-air missiles unreliable, forcing players to fire multiple times in order to destroy their target.
To compensate for this, I am currently reworking the guidance code and testing a new approach to it. The new system looks promising in theory, and works flawlessly in an E2 environment, however I didn't test it on ACF missiles yet. This change may also require the flight code to be modified, and while this will affect all missiles, I will try to keep the unique behaviour of each of them unchanged.
I'll keep this thread updated as I make progress. | True | reli | 1