Technical Debt and its Types Datasets
Collection · 24 items · Updated
| Column | Dtype | Values / Range |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 86 |
| repo_url | stringlengths | 36 – 115 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 459 |
| labels | stringlengths | 4 – 360 |
| body | stringlengths | 3 – 232k |
| index | stringclasses | 8 values |
| text_combine | stringlengths | 96 – 232k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 212k |
| binary_label | int64 | 0 – 1 |

Sample rows below list these fields in the same order, separated by `|`.
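Assuming the collection is exported as CSV with the columns above, a quick sanity check on the `label` to `binary_label` encoding can be done with the standard library's `csv` module. The two inline rows below are a stand-in for the real export (whose filename is not given on this page), and only a subset of the schema's columns is shown:

```python
import csv
import io

# Tiny inline stand-in for the real CSV export; only a subset of the
# schema's columns is included here for brevity.
sample = io.StringIO(
    "type,repo,action,title,label,binary_label\n"
    "IssuesEvent,palantir/atlasdb,opened,Poor test coverage: SweepStatsKeyValueService,non_requirement,0\n"
    "IssuesEvent,LucaFalasca/Dispensa,opened,Adding the missing ingredients,requirement,1\n"
)

# Encoding inferred from the sample rows on this page.
encoding = {"non_requirement": 0, "requirement": 1}

rows = list(csv.DictReader(sample))
for row in rows:
    # binary_label should always agree with the string label
    assert int(row["binary_label"]) == encoding[row["label"]]
```

With the real file, `csv.DictReader` over the opened file (or `pandas.read_csv`) works the same way; the consistency check above is a cheap guard against a drifting label encoding.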
84,475 | 7,922,958,787 | IssuesEvent | 2018-07-05 12:41:05 | palantir/atlasdb | https://api.github.com/repos/palantir/atlasdb | opened | Poor test coverage: SweepStatsKeyValueService | component: testing priority: P2 small | Noticed that SweepStatsKeyValueService doesn't have many tests. I started adding them as I thought I was going to change behaviour as part of fixing #3226, but went another way.
Pushed up a branch, [test/sweep-stats-kvs](https://github.com/palantir/atlasdb/tree/test/sweep-stats-kvs), with the work done so far, but I probably won't continue the testing work now - so filing for tracking. | 1.0 | Poor test coverage: SweepStatsKeyValueService - Noticed that SweepStatsKeyValueService doesn't have many tests. I started adding them as I thought I was going to change behaviour as part of fixing #3226, but went another way.
Pushed up a branch, [test/sweep-stats-kvs](https://github.com/palantir/atlasdb/tree/test/sweep-stats-kvs), with the work done so far, but I probably won't continue the testing work now - so filing for tracking. | non_requirement | poor test coverage sweepstatskeyvalueservice noticed that sweepstatskeyvalueservice doesn t have many tests i started adding them as i thought i was going to change behaviour as part of fixing but went another way pushed up a branch with the work done so far but i probably won t continue the testing work now so filing for tracking | 0 |
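In the row above, `text` looks like a normalized copy of `text_combine`: lowercased, with markdown links dropped entirely (link text included) and every non-letter character, such as `#3226` and apostrophes, collapsed to a space. The dataset's actual cleaning pipeline is not documented on this page, so the function below is only a plausible reconstruction:

```python
import re

def normalize(text: str) -> str:
    # Drop markdown links entirely, including the link text
    text = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", text)
    # Keep letters only; digits, punctuation and apostrophes become spaces
    text = re.sub(r"[^A-Za-z]+", " ", text)
    # Collapse runs of whitespace and lowercase
    return re.sub(r"\s+", " ", text).strip().lower()

print(normalize("Fixing #3226, see [branch](https://example.com)"))  # fixing see
```

Applied to the title and body of the row above, this reproduces that row's `text` value word for word, which suggests the reconstruction is at least close for this sample.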
172,924 | 6,518,006,897 | IssuesEvent | 2017-08-28 05:26:11 | arquillian/smart-testing | https://api.github.com/repos/arquillian/smart-testing | closed | Ordering process should take into account previous configuration/strategies | Component: Core Priority: Medium Status: In Progress Status: Ready Type: Bug | `Test` - normal test
`CTest` - Changed test
`FTest` - Failed test
`CFTest` - changed and failed test
Consider this situation:
we use `-Dsmart.testing=changed, failed`
and we get this list of test classes to reorder:
`Test1, CTest1, CFTest1, FTest1, CTest2, Test2, CTest3, CFTest2`
After applying only the `changed` strategy, then the resulting set would be:
`CTest1, CFTest1, CTest2, CTest3, CFTest2, Test1, FTest1, Test2`
When we apply both strategies `changed, failed`, then the resulting set contains:
`CTest1, CFTest1, CTest2, CTest3, CFTest2, FTest1, Test1, Test2`
Which means that only `FTest1` is moved
What I would expect is that after the first ordering, the classes would be divided into two sets - the one containing classes that fall into the condition of the strategy and the rest - in the case of `changed`:
`[ CTest1, CFTest1, CTest2, CTest3, CFTest2 ] [ Test1, FTest1, Test2 ]`
Then the ordering process for the second strategy would be applied on bots sets separately:
`[ [ CFTest1, CFTest2] [ CTest1, CTest2, CTest3 ] ] [ [ FTest1 ] [ Test1, Test2 ] ]`
So the very first classes that will be run are those that fall into the most strategies.
In an analogical way, I would take into account `runOrder` parameter in Surefire configuration. In case that user uses `alphabetical` order and for smart testing the strategy `new` then all classes in both sets should be in the alphabetical order:
`[ ANewTest, BNewTest, CNewTest ] [ ATest, BTest ]`
| 1.0 | Ordering process should take into account previous configuration/strategies - `Test` - normal test
`CTest` - Changed test
`FTest` - Failed test
`CFTest` - changed and failed test
Consider this situation:
we use `-Dsmart.testing=changed, failed`
and we get this list of test classes to reorder:
`Test1, CTest1, CFTest1, FTest1, CTest2, Test2, CTest3, CFTest2`
After applying only the `changed` strategy, then the resulting set would be:
`CTest1, CFTest1, CTest2, CTest3, CFTest2, Test1, FTest1, Test2`
When we apply both strategies `changed, failed`, then the resulting set contains:
`CTest1, CFTest1, CTest2, CTest3, CFTest2, FTest1, Test1, Test2`
Which means that only `FTest1` is moved
What I would expect is that after the first ordering, the classes would be divided into two sets - the one containing classes that fall into the condition of the strategy and the rest - in the case of `changed`:
`[ CTest1, CFTest1, CTest2, CTest3, CFTest2 ] [ Test1, FTest1, Test2 ]`
Then the ordering process for the second strategy would be applied on bots sets separately:
`[ [ CFTest1, CFTest2] [ CTest1, CTest2, CTest3 ] ] [ [ FTest1 ] [ Test1, Test2 ] ]`
So the very first classes that will be run are those that fall into the most strategies.
In an analogical way, I would take into account `runOrder` parameter in Surefire configuration. In case that user uses `alphabetical` order and for smart testing the strategy `new` then all classes in both sets should be in the alphabetical order:
`[ ANewTest, BNewTest, CNewTest ] [ ATest, BTest ]`
| non_requirement | ordering process should take into account previous configuration strategies test normal test ctest changed test ftest failed test cftest changed and failed test consider this situation we use dsmart testing changed failed and we get this list of test classes to reorder after applying only the changed strategy then the resulting set would be when we apply both strategies changed failed then the resulting set contains which means that only is moved what i would expect is that after the first ordering the classes would be divided into two sets the one containing classes that fall into the condition of the strategy and the rest in the case of changed then the ordering process for the second strategy would be applied on bots sets separately so the very first classes that will be run are those that fall into the most strategies in an analogical way i would take into account runorder parameter in surefire configuration in case that user uses alphabetical order and for smart testing the strategy new then all classes in both sets should be in the alphabetical order | 0 |
440,907 | 12,706,091,884 | IssuesEvent | 2020-06-23 06:27:40 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Special characters in username and password | Affected/3.0.0 Priority/Highest Type/Improvement | **version**
3.0.0
**Steps to reproduce**
1. Setup username "<name" as super_admin username as below [1].
```
[super_admin]
username = "<![CDATA[<name]]>"
password = "password"
create_admin_account = true
[apim.throttling.jms]
username = "%3Cname"
password = "password"
```
2. login to publisher with above credentials.
3. Error - `The server could not verify that you are authorized to access the requested resource.`.
**Query**
Provide restricted character list and expected formats for username and password.
[1] https://apim.docs.wso2.com/en/latest/Administer/ProductSecurity/LoginsAndPasswords/maintaining-logins-and-passwords/#changing-the-super-admin-credentials | 1.0 | Special characters in username and password - **version**
3.0.0
**Steps to reproduce**
1. Setup username "<name" as super_admin username as below [1].
```
[super_admin]
username = "<![CDATA[<name]]>"
password = "password"
create_admin_account = true
[apim.throttling.jms]
username = "%3Cname"
password = "password"
```
2. login to publisher with above credentials.
3. Error - `The server could not verify that you are authorized to access the requested resource.`.
**Query**
Provide restricted character list and expected formats for username and password.
[1] https://apim.docs.wso2.com/en/latest/Administer/ProductSecurity/LoginsAndPasswords/maintaining-logins-and-passwords/#changing-the-super-admin-credentials | non_requirement | special characters in username and password version steps to reproduce setup username name as super admin username as below username password password create admin account true username password password login to publisher with above credentials error the server could not verify that you are authorized to access the requested resource query provide restricted character list and expected formats for username and password | 0 |
792,624 | 27,968,431,271 | IssuesEvent | 2023-03-24 22:08:49 | MetaMask/metamask-mobile | https://api.github.com/repos/MetaMask/metamask-mobile | closed | Rejecting ETH Nano app install crash | type-bug Priority - High team-key-management Ledger | **Describe the bug**
_A clear and concise description of what the bug is_
**Screenshots**
_If applicable, add screenshots or links to help explain your problem_
**To Reproduce**
_Steps to reproduce the behavior_
1. open uniswap in MM browser
2. attempted to swap ETH for apecoin
3. Eth app not installed on Nano
4. Saw screen to inform me to install app :white_check_mark:
5. Clicked “reject”
6. Crash
**Expected behavior**
_A clear and concise description of what you expected to happen_
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- App Version [e.g. 1.0.0] - find version number in app from Settings > About MetaMask
-------------------------------------------------------------
_to be added after bug submission by internal support / PM_
**Severity**
- How critical is the impact of this bug on a user?
- Add stats if available on % of customers impacted
- Is this visible to all users?
- Is this tech debt?
| 1.0 | Rejecting ETH Nano app install crash - **Describe the bug**
_A clear and concise description of what the bug is_
**Screenshots**
_If applicable, add screenshots or links to help explain your problem_
**To Reproduce**
_Steps to reproduce the behavior_
1. open uniswap in MM browser
2. attempted to swap ETH for apecoin
3. Eth app not installed on Nano
4. Saw screen to inform me to install app :white_check_mark:
5. Clicked “reject”
6. Crash
**Expected behavior**
_A clear and concise description of what you expected to happen_
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- App Version [e.g. 1.0.0] - find version number in app from Settings > About MetaMask
-------------------------------------------------------------
_to be added after bug submission by internal support / PM_
**Severity**
- How critical is the impact of this bug on a user?
- Add stats if available on % of customers impacted
- Is this visible to all users?
- Is this tech debt?
| non_requirement | rejecting eth nano app install crash describe the bug a clear and concise description of what the bug is screenshots if applicable add screenshots or links to help explain your problem to reproduce steps to reproduce the behavior open uniswap in mm browser attempted to swap eth for apecoin eth app not installed on nano saw screen to inform me to install app white check mark clicked “reject” crash expected behavior a clear and concise description of what you expected to happen smartphone please complete the following information device os app version find version number in app from settings about metamask to be added after bug submission by internal support pm severity how critical is the impact of this bug on a user add stats if available on of customers impacted is this visible to all users is this tech debt | 0 |
449,067 | 31,826,749,746 | IssuesEvent | 2023-09-14 08:02:13 | hyperledger/iroha-2-docs | https://api.github.com/repos/hyperledger/iroha-2-docs | closed | Validate anchor links | documentation enhancement iroha2 | It would be good if our documentation fails to build in case if we have broken anchor links between or within pages.
It is currently not easy (or even possible) to implement it using existing Vitepress API, so I created an upstream issue:
- ~~https://github.com/vuejs/vitepress/issues/2176~~ https://github.com/vuejs/vitepress/issues/646 | 1.0 | Validate anchor links - It would be good if our documentation fails to build in case if we have broken anchor links between or within pages.
It is currently not easy (or even possible) to implement it using existing Vitepress API, so I created an upstream issue:
- ~~https://github.com/vuejs/vitepress/issues/2176~~ https://github.com/vuejs/vitepress/issues/646 | non_requirement | validate anchor links it would be good if our documentation fails to build in case if we have broken anchor links between or within pages it is currently not easy or even possible to implement it using existing vitepress api so i created an upstream issue | 0 |
5,193 | 7,738,939,268 | IssuesEvent | 2018-05-28 13:54:18 | Hajeong-Noh/20180505_AWT-project | https://api.github.com/repos/Hajeong-Noh/20180505_AWT-project | closed | Selecting a peak in the 2d or | Clarify Requirements | 3d map shows all the data about the
peak and a form for editing peak data." means that the worker can edit the peak data or just annotations? | 1.0 | Selecting a peak in the 2d or - 3d map shows all the data about the
peak and a form for editing peak data." means that the worker can edit the peak data or just annotations? | requirement | selecting a peak in the or map shows all the data about the peak and a form for editing peak data means that the worker can edit the peak data or just annotations | 1 |
43,390 | 2,889,549,823 | IssuesEvent | 2015-06-13 14:42:11 | canadainc/quran10 | https://api.github.com/repos/canadainc/quran10 | closed | Implement Duplicate Tafsir from English | Admin Component-Logic Component-UI Fixed Maintainability Priority-High Type-Enhancement Usability Verified | Allow a tafsir to be duplicated from the English version. | 1.0 | Implement Duplicate Tafsir from English - Allow a tafsir to be duplicated from the English version. | non_requirement | implement duplicate tafsir from english allow a tafsir to be duplicated from the english version | 0 |
300,690 | 9,211,708,469 | IssuesEvent | 2019-03-09 17:37:53 | CMPUT301W19T11/Atheneum | https://api.github.com/repos/CMPUT301W19T11/Atheneum | closed | Refactor View Profile | high priority | - remove the profile thing from the main navbar
- when you click the current's user picture, take them to a new activity to view
- have the edit button take to the edit activity
- search takes to another activity to view profile
- maybe send the userid to view profile instead of the user, i find that the user crashes when serialized | 1.0 | Refactor View Profile - - remove the profile thing from the main navbar
- when you click the current's user picture, take them to a new activity to view
- have the edit button take to the edit activity
- search takes to another activity to view profile
- maybe send the userid to view profile instead of the user, i find that the user crashes when serialized | non_requirement | refactor view profile remove the profile thing from the main navbar when you click the current s user picture take them to a new activity to view have the edit button take to the edit activity search takes to another activity to view profile maybe send the userid to view profile instead of the user i find that the user crashes when serialized | 0 |
9,710 | 13,796,041,175 | IssuesEvent | 2020-10-09 19:06:57 | CMPUT301F20T01/Bookmark | https://api.github.com/repos/CMPUT301F20T01/Bookmark | opened | 02.01.01: Contents of user profile | requirement | ### Rationale
User profile contains a unique username and the user's contact information (phone number, email).
#### Story Points
3
#### Risk Level
medium
#### User Story
US 02.01.01
As an owner or borrower, I want a profile with a unique username and my contact information.
| 1.0 | 02.01.01: Contents of user profile - ### Rationale
User profile contains a unique username and the user's contact information (phone number, email).
#### Story Points
3
#### Risk Level
medium
#### User Story
US 02.01.01
As an owner or borrower, I want a profile with a unique username and my contact information.
| requirement | contents of user profile rationale user profile contains a unique username and the user s contact information phone number email story points risk level medium user story us as an owner or borrower i want a profile with a unique username and my contact information | 1 |
6,228 | 8,923,592,260 | IssuesEvent | 2019-01-21 16:02:56 | gnosis/dex-contracts | https://api.github.com/repos/gnosis/dex-contracts | closed | State Transition to emit slot | requirement | Since the deposits, withdraws and auctions all lie in a current slot, the Event listener would need to know (not only which type of state transition occurred), but also which slot was applied.
This would imply the inclusion of `uint slot` as part of the StateTransition Event. | 1.0 | State Transition to emit slot - Since the deposits, withdraws and auctions all lie in a current slot, the Event listener would need to know (not only which type of state transition occurred), but also which slot was applied.
This would imply the inclusion of `uint slot` as part of the StateTransition Event. | requirement | state transition to emit slot since the deposits withdraws and auctions all lie in a current slot the event listener would need to know not only which type of state transition occurred but also which slot was applied this would imply the inclusion of uint slot as part of the statetransition event | 1 |
11,436 | 17,112,416,746 | IssuesEvent | 2021-07-10 15:55:49 | wfknowles/taskinator | https://api.github.com/repos/wfknowles/taskinator | closed | Change Task | requirement | * Add two status task lists
* Add buttons to change tasks
* Delete a task
* Edit a task
* Add drop down menu to change task status
* Move task based on status task list | 1.0 | Change Task - * Add two status task lists
* Add buttons to change tasks
* Delete a task
* Edit a task
* Add drop down menu to change task status
* Move task based on status task list | requirement | change task add two status task lists add buttons to change tasks delete a task edit a task add drop down menu to change task status move task based on status task list | 1 |
66,015 | 12,702,827,039 | IssuesEvent | 2020-06-22 20:57:23 | sourcegraph/sourcegraph | https://api.github.com/repos/sourcegraph/sourcegraph | opened | Design build step configuration in auto-indexer | team/code-intelligence | The auto indexer does not currently allow any configurable build steps. This is required for repos that require code generation steps (e.g. protobuf that's not checked in) and for repos that need special dependency resolution.
Write an RFC to show how this would be configured and run. | 1.0 | Design build step configuration in auto-indexer - The auto indexer does not currently allow any configurable build steps. This is required for repos that require code generation steps (e.g. protobuf that's not checked in) and for repos that need special dependency resolution.
Write an RFC to show how this would be configured and run. | non_requirement | design build step configuration in auto indexer the auto indexer does not currently allow any configurable build steps this is required for repos that require code generation steps e g protobuf that s not checked in and for repos that need special dependency resolution write an rfc to show how this would be configured and run | 0 |
371,709 | 10,980,652,211 | IssuesEvent | 2019-11-30 15:57:41 | T4g1/jamcraft4 | https://api.github.com/repos/T4g1/jamcraft4 | closed | Game objective - Game name | fixed priority: high | Objective of the game? When is it won/lost
What happens on death?
What is the game called? | 1.0 | Game objective - Game name - Objective of the game? When is it won/lost
What happens on death?
What is the game called? | non_requirement | game objective game name objective of the game when is it won lost what happens on death what is the game called | 0 |
7,582 | 10,721,833,954 | IssuesEvent | 2019-10-27 06:52:07 | Groove-Theory/Groovebot | https://api.github.com/repos/Groove-Theory/Groovebot | closed | Groove Points | enhancement large project requirements needed | Groove Points!!
1. What do we award this for? And how?
2. Per guild? Globally?
3. Leaderboards?
4. Store in a new Collection, not in GameData
Am I just making a starboard?
By message count? I could do that, since ChannelListener detects that anyway.
g!givepoint or something to award .... other forms of credits
At some point this will have to relate to the new website | 1.0 | Groove Points - Groove Points!!
1. What do we award this for? And how?
2. Per guild? Globally?
3. Leaderboards?
4. Store in a new Collection, not in GameData
Am I just making a starboard?
By message count? I could do that, since ChannelListener detects that anyway.
g!givepoint or something to award .... other forms of credits
At some point this will have to relate to the new website | requirement | groove points groove points what do we award this for and how per guild globally leaderboards store in a new collection not in gamedata am i just making a starboard by message count i could do that since channellistener detects that anyway g givepoint or something to award other forms of credits at some point this will have to relate to the new website | 1 |
60,764 | 8,461,218,227 | IssuesEvent | 2018-10-22 21:07:31 | droidkfx/Yet-Another-Productivity-App | https://api.github.com/repos/droidkfx/Yet-Another-Productivity-App | closed | document new features | documentation | - [ ] Logout needs to be documented
- [ ] Document update to yapa-api - added authentication, bugfix multithread access to task | 1.0 | document new features - - [ ] Logout needs to be documented
- [ ] Document update to yapa-api - added authentication, bugfix multithread access to task | non_requirement | document new features logout needs to be documented document update to yapa api added authentication bugfix multithread access to task | 0 |
7,440 | 10,660,845,130 | IssuesEvent | 2019-10-18 10:51:53 | LucaFalasca/Dispensa | https://api.github.com/repos/LucaFalasca/Dispensa | opened | Adding the missing ingredients | Fuctional Requirements | The system shall add the missing ingredients from a recipe on the shopping list, if the user want | 1.0 | Adding the missing ingredients - The system shall add the missing ingredients from a recipe on the shopping list, if the user want | requirement | adding the missing ingredients the system shall add the missing ingredients from a recipe on the shopping list if the user want | 1 |
34,954 | 2,789,537,220 | IssuesEvent | 2015-05-08 19:56:53 | nprapps/lookatthis | https://api.github.com/repos/nprapps/lookatthis | closed | ios orientation change bug | Priority: Normal | switching from portrait to landscape causes the background image to drift outside the viewport. | 1.0 | ios orientation change bug - switching from portrait to landscape causes the background image to drift outside the viewport. | non_requirement | ios orientation change bug switching from portrait to landscape causes the background image to drift outside the viewport | 0 |
10,499 | 15,257,270,327 | IssuesEvent | 2021-02-21 00:18:22 | CMPUT301W21T18/PreemptiveOOP | https://api.github.com/repos/CMPUT301W21T18/PreemptiveOOP | opened | US 01.03.01 - END an Experiment | Basic Requirement | **User story**
US 01.03.01
As an owner, I want to end an experiment. This leaves the results available and public but does not allow new results to be added.
**Rationale**
- Owner should has right to terminate an experiment, the result can be still available to see, but experimenters cannot add new trials to the experiment.
**Story point = 3/5
Risk level = Medium**
| 1.0 | US 01.03.01 - END an Experiment - **User story**
US 01.03.01
As an owner, I want to end an experiment. This leaves the results available and public but does not allow new results to be added.
**Rationale**
- Owner should has right to terminate an experiment, the result can be still available to see, but experimenters cannot add new trials to the experiment.
**Story point = 3/5
Risk level = Medium**
| requirement | us end an experiment user story us as an owner i want to end an experiment this leaves the results available and public but does not allow new results to be added rationale owner should has right to terminate an experiment the result can be still available to see but experimenters cannot add new trials to the experiment story point risk level medium | 1 |
2,412 | 4,789,687,083 | IssuesEvent | 2016-10-31 03:18:24 | jobiols/cursos | https://api.github.com/repos/jobiols/cursos | opened | crear modulo para manejo de tarjetas crédito / debito | requirement | - Carga de comisiones de cada tarjeta
- Calculo de la comision del banco a agregar
- Calculo del valor de la cuota que paga el cliente (para que el cliente elija financiacion)
- Carga de cupones para conciliacion
- Importación de datos de liquidaciones FirstData, Visa, etc.
- Configuración de recargo por tarjeta | 1.0 | crear modulo para manejo de tarjetas crédito / debito - - Carga de comisiones de cada tarjeta
- Calculo de la comision del banco a agregar
- Calculo del valor de la cuota que paga el cliente (para que el cliente elija financiacion)
- Carga de cupones para conciliacion
- Importación de datos de liquidaciones FirstData, Visa, etc.
- Configuración de recargo por tarjeta | requirement | crear modulo para manejo de tarjetas crédito debito carga de comisiones de cada tarjeta calculo de la comision del banco a agregar calculo del valor de la cuota que paga el cliente para que el cliente elija financiacion carga de cupones para conciliacion importación de datos de liquidaciones firstdata visa etc configuración de recargo por tarjeta | 1 |
3,058 | 5,435,713,351 | IssuesEvent | 2017-03-05 19:19:05 | jau35/CS451 | https://api.github.com/repos/jau35/CS451 | opened | R3.1.3.2.2 | requirement | The user must press the mouse down on a piece, drag the piece to where it needs to be moved, and release the mouse in order to move the piece. | 1.0 | R3.1.3.2.2 - The user must press the mouse down on a piece, drag the piece to where it needs to be moved, and release the mouse in order to move the piece. | requirement | the user must press the mouse down on a piece drag the piece to where it needs to be moved and release the mouse in order to move the piece | 1 |
2,209 | 4,569,120,671 | IssuesEvent | 2016-09-15 16:16:50 | ngageoint/hootenanny-ui | https://api.github.com/repos/ngageoint/hootenanny-ui | closed | Hoot not using full real estate | Category: UI Priority: Medium Status: In Progress Status: Ready for Test Type: Bug Type: Requirement | 1. Open Chrome
2. Snap Chrome to left or right of desktop so that it fills half the desktop real estate
3. Open Hoot
4. Maximize Chrome

| 1.0 | Hoot not using full real estate - 1. Open Chrome
2. Snap Chrome to left or right of desktop so that it fills half the desktop real estate
3. Open Hoot
4. Maximize Chrome

| requirement | hoot not using full real estate open chrome snap chrome to left or right of desktop so that it fills half the desktop real estate open hoot maximize chrome | 1 |
175,122 | 6,547,228,387 | IssuesEvent | 2017-09-04 13:54:23 | dwyl/hq | https://api.github.com/repos/dwyl/hq | opened | VAT Return | August - October 2017 | dependency finance priority-4 | VAT return for the period of 01.08.2017 - 31.10.2017.
**Deadline:** 1st December.
VAT return and payment deadline is 7th November but this task needs to be completed ahead of time as bank funds to HMRC take 2 days to clear so this period needs to be taken account of so that we don't have to overpay and wait for HMRC to return the difference once the return is finalised again.
Low priority whilst we are waiting for the period to finish, but will be revised in November | 1.0 | VAT Return | August - October 2017 - VAT return for the period of 01.08.2017 - 31.10.2017.
**Deadline:** 1st December.
VAT return and payment deadline is 7th November but this task needs to be completed ahead of time as bank funds to HMRC take 2 days to clear so this period needs to be taken account of so that we don't have to overpay and wait for HMRC to return the difference once the return is finalised again.
Low priority whilst we are waiting for the period to finish, but will be revised in November | non_requirement | vat return august october vat return for the period of deadline december vat return and payment deadline is november but this task needs to be completed ahead of time as bank funds to hmrc take days to clear so this period needs to be taken account of so that we don t have to overpay and wait for hmrc to return the difference once the return is finalised again low priority whilst we are waiting for the period to finish but will be revised in november | 0 |
18,720 | 5,696,667,261 | IssuesEvent | 2017-04-16 14:18:26 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] 500 when loading com_associations | No Code Attached Yet | ### Steps to reproduce the issue
Multilingual site (Install languages via package install). 2 languages is enough.
Create items in both languages.
Go to `administrator/index.php?option=com_associations&view=associations`
Choose item and language
> 500 - An error has occurred.
> 'joomla40.a.alias' isn't in GROUP BY 'joomla40.a.alias' isn't in GROUP BY
| 1.0 | [4.0] 500 when loading com_associations - ### Steps to reproduce the issue
Multilingual site (Install languages via package install). 2 languages is enough.
Create items in both languages.
Go to `administrator/index.php?option=com_associations&view=associations`
Choose item and language
> 500 - An error has occurred.
> 'joomla40.a.alias' isn't in GROUP BY 'joomla40.a.alias' isn't in GROUP BY
| non_requirement | when loading com associations steps to reproduce the issue multilingual site install languages via package install languages is enough create items in both languages go to administrator index php option com associations view associations choose item and language an error has occurred a alias isn t in group by a alias isn t in group by | 0 |
316,587 | 27,168,409,394 | IssuesEvent | 2023-02-17 17:08:14 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Teste de generalizacao para a tag Receitas - Dados das Receitas - Brumadinho | generalization test development template - ABO (21) tag - Receitas subtag - Dados das receitas | DoD: Realizar o teste de Generalização do validador da tag Receitas - Dados das Receitas para o Município de Brumadinho. | 1.0 | Teste de generalizacao para a tag Receitas - Dados das Receitas - Brumadinho - DoD: Realizar o teste de Generalização do validador da tag Receitas - Dados das Receitas para o Município de Brumadinho. | non_requirement | teste de generalizacao para a tag receitas dados das receitas brumadinho dod realizar o teste de generalização do validador da tag receitas dados das receitas para o município de brumadinho | 0 |
11,762 | 18,043,738,998 | IssuesEvent | 2021-09-18 14:09:19 | OffchainLabs/arb-token-bridge | https://api.github.com/repos/OffchainLabs/arb-token-bridge | closed | selected token shouldn't persist when switching between mainnet/testnet | bug new-ui new-ui-requirement | Should track a separate "current token" for mainnet and testnet, or even just clearing upon switch would be fine | 1.0 | selected token shouldn't persist when switching between mainnet/testnet - Should track a separate "current token" for mainnet and testnet, or even just clearing upon switch would be fine | requirement | selected token shouldn t persist when switching between mainnet testnet should track a separate current token for mainnet and testnet or even just clearing upon switch would be fine | 1 |
642,712 | 20,911,043,163 | IssuesEvent | 2022-03-24 09:20:50 | AY2122S2-CS2113T-T10-1/tp | https://api.github.com/repos/AY2122S2-CS2113T-T10-1/tp | opened | Add ActivityCommandParser class | priority.Medium backend.command backend.parser | Shift necessary attributes and methods from ActivityCreateCommand class to ActivityCreateCommandParser class | 1.0 | Add ActivityCommandParser class - Shift necessary attributes and methods from ActivityCreateCommand class to ActivityCreateCommandParser class | non_requirement | add activitycommandparser class shift necessary attributes and methods from activitycreatecommand class to activitycreatecommandparser class | 0 |
335,130 | 24,455,461,034 | IssuesEvent | 2022-10-07 06:09:56 | flame-engine/flame | https://api.github.com/repos/flame-engine/flame | closed | Generate diagrams in the docs from code instead of having static images | documentation | <!-- When reporting a improvement, please read this complete template and fill all the questions in order to get a better response -->
# What could be improved
We should generate the diagrams in the docs instead of using static images
# Why should this be improved
It improves the ability to see the changes in version control and it becomes much easier to change for the people that don't sit on the source files for the images.
# Any risks?
The images might look uglier.
# More information
<!-- Do you have any other useful information about this improvement report? Please write it down here -->
<!-- Possible helpful information: references to other sites/repositories -->
<!-- Are you interested in working on a PR for this? -->
| 1.0 | Generate diagrams in the docs from code instead of having static images | non_requirement | generate diagrams in the docs from code instead of having static images | 0 |
12,038 | 18,792,036,371 | IssuesEvent | 2021-11-08 17:48:35 | Green-Software-Foundation/software_carbon_intensity | https://api.github.com/repos/Green-Software-Foundation/software_carbon_intensity | closed | Site reliability engineering principles and impact of SCI on them | requirements-constraints | **What is SLI and SLO?**
Reliability is the quality of a system being trustworthy and performing consistently well. Today most applications have a Service Level Objective (SLO) to facilitate monitoring. An SLO typically has three components:
1) A Service Level Indicator (SLI)
2) A target metric, or objective, typically expressed as a percentage
3) An observation window
SLI + Objective + Observation window = SLO
An SLI has an event and a success criterion, and specifies where and how success or failure is recorded. It is expressed as the proportion of events that were good.
Example: the percentage of query requests, measured at the Query Gateway frontend, that return neither a 5xx error code nor a timeout
SLIs exist to help engineering teams make better decisions. Your SLO performance is critical information to have when you’re making decisions about how hard and fast you can push your systems. SLOs are also important data points for other engineers when they’re making assumptions about their dependencies on your service or system. Lastly, your larger organization should use your SLIs and SLOs to make informed decisions about investment levels and about balancing reliability work against engineering velocity.
Availability SLO example
• 90% of HTTP requests as reported by the load balancer succeeded in the last 30-day window (_here the proportion of HTTP requests that succeeded is the SLI, 90% is the objective, and 30 days is the observation window_)
Latency SLO examples:
• 100% of requests in the last 5 minutes as measured at load balancer are served in less than 900ms
• 99.99% of requests in the last 5 minutes as measured at load balancer are served in less than 500ms
• 90% of requests in the last 5 minutes as measured at load balancer are served in less than 200ms
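As an illustrative sketch (the event fields and thresholds below are assumptions, not taken from any specific load balancer), an availability SLI and its SLO check could be computed from request events like this:

```python
def availability_sli(events):
    """Fraction of requests that are 'good': no 5xx status and no timeout."""
    good = sum(1 for e in events if e["status"] < 500 and not e["timed_out"])
    return good / len(events)

def slo_met(events, objective=0.90):
    """True when the SLI over the observation window meets the objective."""
    return availability_sli(events) >= objective

# A tiny observation window of three requests:
window = [
    {"status": 200, "timed_out": False},
    {"status": 503, "timed_out": False},
    {"status": 200, "timed_out": False},
]
print(slo_met(window, objective=0.60))  # True: 2 of 3 requests were good
```

The same shape works for latency SLIs by swapping the success criterion for a response-time threshold.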
**How organizations can re-define SRE considering SCI.**
Software Carbon Intensity is a relative score for comparing applications on how carbon efficient, carbon aware, and hardware efficient they are.
SCI for an application = (energy used by the application × location-based marginal carbon intensity) + embodied carbon, per unit of baseline
Here the baseline is per API call, per additional user, per additional ML job, etc.
From an SRE principle and alignment, then, organizations would like to track carbon emissions from their applications using the SCI score and raise alerts on breach, i.e. if it increases beyond a certain percentage. If an application has an SCI score of x, organizations would then track variance from this value and configure monitoring principles around it.
**How would you then correctly define the metric as per SRE principles?**
In the above formula for SCI, the baseline is a key aspect. We will explain with an example where we consider the baseline to be "one instance of a batch job". The batch job is one component within a larger "software" or "application", which could, for example, be a web application workload with a batch job running a long-running business process that does not need user interaction.
Let us assume that a batch job running in West Europe has a CI value of 100 kgCO2 per instance of an Azure WebJob, and that the SLO for SCI has been defined as not more than 20% variance. If during the operating window of the job the service cranks up and the carbon intensity increases to 121 kgCO2, then an alert has to be signaled. However, this is theoretical. We have to look at this increase in the context of many factors: the interplay of SCI increases with other SLOs like latency and performance; how the West Europe datacenter was powered (% of coal/renewables) during the heightened operation of this job; inefficient threading and garbage-collection practices in the code that surfaced during peak operation; etc.
When this incident happens (as per SRE principles, this is an incident that should be monitored and alerted on, like a Sev 1, 2, or 3 incident), there could be multiple tuning techniques. One technique that comes to mind is to try moving the workload to a different datacenter that is better powered by renewables (for example, by calling the WattTime API) or shifting the workload to a different time of the day. These techniques need detailed, vetted data upfront for the "orchestration algorithm" to make dynamic decisions about moving the workload. However, today we do not have defined and foolproof information on how much each of these tuning techniques contributes to managing the increased carbon intensity. This data has to be collated and cross-verified over a longer period of time to come up with authentic deductions.
Hence, for the initial version of the specification, I propose that we raise the level of abstraction for monitoring SCI to the application level rather than the individual component, i.e. we will keep the baseline for software carbon intensity at the "application level" rather than at a batch job, ML job, API call, etc.
Thus the metric we will use for the site reliability engineer is the total carbon emissions (C) value. The formula for this metric is C = O + M, where O = E × I.
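As a minimal sketch of this metric, assuming E is measured in kWh, I in kgCO2 per kWh, and M (embodied carbon) is already apportioned in kgCO2 — with illustrative numbers chosen to match the 100 kgCO2 batch-job example above:

```python
def total_carbon(E_kwh, I_kg_per_kwh, M_kg):
    """C = O + M, where O = E * I (operational emissions)."""
    O = E_kwh * I_kg_per_kwh
    return O + M_kg

def sci_alert(current_C, baseline_C, allowed_variance=0.20):
    """Signal an alert when C drifts more than allowed_variance above baseline."""
    return current_C > baseline_C * (1 + allowed_variance)

# Baseline matching the 100 kgCO2 batch-job example (M taken as 0 here):
baseline = total_carbon(E_kwh=250, I_kg_per_kwh=0.4, M_kg=0)
print(sci_alert(121, baseline))  # True: 121 kgCO2 exceeds the 120 kgCO2 (20%) limit
```

In practice E would come from a power-measurement or cloud telemetry source and I from a grid-carbon signal; both are stand-ins here.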
**Scope of Metric**
For this metric, the definition of scope around which SCI will operate is very important. Scope is the boundary area where we will apply the monitoring. Since we are talking about software, the boundary here is the software boundary as defined in the SCI specification.
However, we may not be able to apply this uniformly to all software. Software varies by architecture, environment, and hosting type (dedicated infrastructure vs. shared infrastructure vs. serverless), and the implementation of SRE monitoring for SCI varies by these factors. We will discuss these factors below:
1) Architecture of Software
Different application architectures need monitoring differently. Consider for example the following workloads:
1) Web based multi-tier application or Long running process deployed on either cloud or on-premise
2) Mobile app connecting to backend APIs on cloud or on-premise
3) Desktop app connecting to backend APIs on cloud or on-premise
4) AI based machine learning model experiments
5) Open Source or Closed Source Framework SDKs
6) Serverless applications
From a pure monitoring perspective of the SRE metric, doing it on the server-based workloads in the above list may be the first step. For example, web-based multi-tier applications have virtual machines or EC2 instances connecting to APIs and databases, and hence we can monitor the operational emissions of these server components. Similarly, we can calculate the metric for the backend server APIs and serverless components of the mobile and desktop apps.
There would be challenges, however, in doing the same for desktop devices and mobile platforms, as the emissions calculations would need to know, at least roughly, the total number of mobile devices or desktops, their types, etc. Hence, for the first release of the specification, we propose that monitoring covers a subset of the above workloads, i.e. those with majorly server-side components.
2) Hosting Infrastructure - Dedicated Vs Shared
Monitoring techniques will also vary based on the hosting mode of the software. For software with dedicated infrastructure, SCI is simply the sum of operational emission values across the different layers. In the equation for SCI (SCI per unit of baseline = (E × I) + M), the value of M does not affect the delta carbon intensity (current CI − original CI), since the hardware is exclusively reserved for the said software. Hence the monitoring technique can simply look for variances in the operational emissions value to raise an alert for the site reliability engineer.
The situation is different when we consider shared infrastructure: shared servers, multi-tenant databases, or SaaS software shared by multiple customers. Here multiple micro-services could share the same PaaS compute platforms and storage services, which by design is carbon friendly. In these cases, the percentage of infrastructure allocated is necessary information for calculating the carbon intensity value of the specific customer software. Hence we need to include the embodied emissions (M) value in the monitoring metric.
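One simple way to derive a tenant's M share on shared hardware is a time-and-resource apportionment. This is only a sketch under assumed inputs — the total embodied emissions, hardware lifespan, reserved time, and resource fraction below are all illustrative:

```python
def embodied_share(M_total_kg, lifespan_hours, reserved_hours, resource_fraction):
    """Apportion a server's total embodied emissions to one tenant by the
    time it holds the hardware and the fraction of resources it uses."""
    return M_total_kg * (reserved_hours / lifespan_hours) * resource_fraction

# A tenant using 25% of a server's resources for 730 hours of a 4-year lifespan:
M = embodied_share(M_total_kg=1000, lifespan_hours=4 * 365 * 24,
                   reserved_hours=730, resource_fraction=0.25)  # ≈ 5.21 kgCO2
```

This M value is what would feed into the C = O + M metric for that tenant's monitoring.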
3) Application Environment Types
The usage of the above SRE metric also changes by environment. This is somewhat tied to the application architecture factor, but broadly the concept is that, for the purpose of carbon tracking and monitoring, measurement should be done for all environments: development, testing, QA, performance, and production. This is because the carbon emission of the software increases manifold in lower environments, such as development and QA, for workloads like machine learning models.
Multiple iterations of running AI experiments in lower environments should be tracked for carbon emissions and hence the scope of the metric should be monitored at the environment level.
Similarly for the other common workload scenarios like web or desktop applications, multiple performance tests are executed to achieve the SLO targets for throughput and /or latency. Through the process of trying to achieve these targets, the compute and storage resources are used more intensively than it would be on a production environment. Hence tracking of the metric is recommended at Environment scope as well.
**<to be added>**
**Conclusion**
Some of the deductions we have made at the end of this article:
1) Total carbon emissions (C) is the metric we will monitor at the SRE level.
2) This metric can be applied at multiple scope levels: environment, hosting infrastructure type, and application architecture.
3) In future iterations of the specification, work should be done to understand the SRE impact of the C metric on other SLO attributes such as latency and availability. A brief write-up follows:
Availability SLOs: Availability SLOs can be met either by software changes and redundant application design patterns, or by hardware redundancy. In the most common scenarios, however, they are met by having hot-standby and/or warm/cold-standby infrastructure configurations. This directly impacts the embodied carbon (M) term in the above equation, and hence tradeoffs have to be defined between meeting the availability SLO and the allowed variance in SCI.
Latency SLOs:
Meeting latency SLOs involves increasing the compute power allocated to the workload, spending developer cycles to fix performance issues, allocating the workload to synchronous services rather than asynchronous services that could run at energy-efficient times, and scaling up the hardware required. Hence attempting to meet aggressive latency SLOs can impact all the terms of the above equation: energy efficiency, carbon awareness, and hardware efficiency.
Hence from a specification point of view, the SCI score can be integrated into the SLO examples as follows
Availability SLO example with SCI
• 90% of HTTP requests as reported by the load balancer succeeded in the last 30-day window, while ensuring that the overall SCI does not increase by more than x%
Latency SLO example with SCI
• 100% of requests in the last 5 minutes as measured at the load balancer are served in less than 900ms, while ensuring that the overall SCI does not increase by more than x%
**How can we monitor SCI impact?**
Performance tests are a great way to measure the SCI impact on SRE. Today they are used primarily to check whether the application meets its service level objectives. We can add a small number of additional performance tests (not many, as that would simply transfer carbon emissions from the production environment to the performance environments) that also monitor C, adjusting the performance goal downwards (mostly!) where needed to ensure SCI variances are not breached.
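As a sketch, such a check could be wired into a performance-test suite as a simple gate; the baseline value and allowed variance below are illustrative assumptions:

```python
def check_sci_variance(measured_C, baseline_C, allowed_variance=0.20):
    """Performance-test gate: fail the run when total carbon emissions (C)
    drift more than allowed_variance above the recorded baseline."""
    limit = baseline_C * (1 + allowed_variance)
    assert measured_C <= limit, (
        f"SCI regression: C={measured_C} kgCO2 exceeds limit {limit} kgCO2"
    )

# Passes: within the allowed 20% variance of a 100 kgCO2 baseline
check_sci_variance(measured_C=115, baseline_C=100)
```

A pytest-style wrapper around this function would fail the build on an SCI regression the same way it fails on a latency regression.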
and or warm cold standby infrastructure configurations this directly impacts the “embodied carbon” co efficient in the above equation and hence tradeoffs have to be defined between meeting availability slo and allowed variance in sci latency slos meeting latency slos involves either increasing the compute power allocated to the workload spending developer cycles to fix performance issues allocating the workload to synchronous services rather than async services that can run in energy efficient time sand also scaling the hardware required hence attempting to meet aggressive latency slos involves impacting all the co efficient of the above equation carbon efficiency carbon aware and hardware efficiency hence from a specification point of view the sci score can be integrated into the slo examples as follows availability slo example with sci • of http requests as reported by the load balancer succeeded in the last day window and ensuring that the overall sci does not go higher than x latency slo example with sci • of requests in the last minutes as measured at load balancer are served in less than and ensuring that the overall sci does not go higher than x how can we monitor sci impact performance tests are great ways to measure sci impact on sre today they are used primarily to see if the application meets service level objectives we can add a couple of addition of performance tests not a lot as that would mean transferring the sci from prod environment to performance environments and cycles to monitor for performance and adjusting the performance goal downwards mostly to ensure sci variances are not breached | 1 |
13,328 | 22,638,297,474 | IssuesEvent | 2022-06-30 21:33:05 | NASA-PDS/pds-api | https://api.github.com/repos/NASA-PDS/pds-api | reopened | As a user, I want to only return the latest version of a product that has changed logical identifiers in it's history | requirement B12.1 B13.0 p.should-have sprint-backlog c.search-api | <!--
For more information on how to populate this new feature request, see the PDS Wiki on User Story Development:
https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development
-->
## 💪 Motivation
...so that I am not confused by seeing superseded data returned in search results
## 📖 Additional Details
<!-- Please prove any additional details or information that could help provide some context for the user story. -->
Per the [parent epic](https://github.com/nasa-pds/pds-registry-app/issues/219), and [the design](https://github.com/nasa-pds/pds-registry-app/issues/229) we need to update the API to implement these changes so we can sufficiently understand the version history.
See parent epic for more details.
## ⚖️ Acceptance Criteria
**Given** the context products [urn:nasa:pds:context:instrument:crs.vg1::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0_deprecated.xml) and [urn:nasa:pds:context:instrument:vg1.crs::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0.xml) ingested into the registry
**When I perform** a query of the API for `products/` and paginate through the results
**Then I expect** I should only see the product metadata for [urn:nasa:pds:context:instrument:vg1.crs::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0.xml) returned, not the superseded/deprecated [urn:nasa:pds:context:instrument:crs.vg1::1.0](https://pds.nasa.gov/data/pds4/context-pds4/instrument/vg1.crs_1.0_deprecated.xml)
**NOTE: This functionality should apply to all endpoints, not just the `products/` endpoints**
<!-- For Internal Dev Team Use -->
## ⚙️ Engineering Details
<!--
Provide some design / implementation details and/or a sub-task checklist as needed.
Convert issue to Epic if estimate is outside the scope of 1 sprint.
-->
| 1.0 |
| requirement | as a user i want to only return the latest version of a product that has changed logical identifiers in it s history for more information on how to populate this new feature request see the pds wiki on user story development 💪 motivation so that i am not confused by seeing superseded data returned in search results 📖 additional details per the and we need to update the api to implement these changes so we can sufficiently understand the version history see parent epic for more details ⚖️ acceptance criteria given the context products and ingested into the registry when i perform a query of the api for products and paginate through the results then i expect i should only see the product metadata for returned not the superseded deprecated note this functionality should apply to all endpoints not just the products endpoints ⚙️ engineering details provide some design implementation details and or a sub task checklist as needed convert issue to epic if estimate is outside the scope of sprint | 1 |
113,129 | 24,371,073,245 | IssuesEvent | 2022-10-03 19:20:46 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Test failure JIT\\Regression\\JitBlue\\Runtime_34937\\Runtime_34937\\Runtime_34937.cmd | arch-arm64 arch-x86 os-linux os-mac-os-x os-windows arch-x64 area-CodeGen-coreclr blocking-outerloop | Run: [runtime-coreclr outerloop 20220925.2](https://dev.azure.com/dnceng-public/public/_build/results?buildId=29432&view=ms.vss-test-web.build-test-results-tab&runId=592110&resultId=108659&paneView=debug)
Failed test:
```
R2R-CG2 windows arm64 Checked no_tiered_compilation @ Windows.10.Arm64v8.Open
- JIT\\Regression\\JitBlue\\Runtime_34937\\Runtime_34937\\Runtime_34937.cmd
- JIT\\Regression\\JitBlue\\Runtime_33972\\Runtime_33972\\Runtime_33972.cmd
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 windows arm64 Checked @ Windows.10.Arm64v8.Open
- JIT\\Regression\\JitBlue\\Runtime_34937\\Runtime_34937\\Runtime_34937.cmd
- JIT\\Regression\\JitBlue\\Runtime_33972\\Runtime_33972\\Runtime_33972.cmd
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 windows arm Checked no_tiered_compilation @ Windows.10.Arm64v8.Open
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 windows arm Checked @ Windows.10.Arm64v8.Open
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 windows x64 Checked no_tiered_compilation @ Windows.10.Amd64.Open
- JIT\\Regression\\JitBlue\\Runtime_34937\\Runtime_34937\\Runtime_34937.cmd
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 Linux arm64 Checked no_tiered_compilation @ (Ubuntu.1804.Arm64.Open)[email protected]/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm64v8-20220824230426-06f234f
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_33972/Runtime_33972/Runtime_33972.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 windows x86 Checked no_tiered_compilation @ Windows.10.Amd64.Open
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 Linux arm64 Checked no_tiered_compilation @ (Alpine.314.Arm64.Open)[email protected]/dotnet-buildtools/prereqs:alpine-3.14-helix-arm64v8-20210910135810-8a6f4f3
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_33972/Runtime_33972/Runtime_33972.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 windows x64 Checked @ Windows.10.Amd64.Open
- JIT\\Regression\\JitBlue\\Runtime_34937\\Runtime_34937\\Runtime_34937.cmd
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 Linux x64 Checked no_tiered_compilation @ (Alpine.314.Amd64.Open)[email protected]/dotnet-buildtools/prereqs:alpine-3.14-helix-amd64-20210910135833-1848e19
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 Linux x64 Checked @ (Alpine.314.Amd64.Open)[email protected]/dotnet-buildtools/prereqs:alpine-3.14-helix-amd64-20210910135833-1848e19
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 Linux arm64 Checked @ (Ubuntu.1804.Arm64.Open)[email protected]/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm64v8-20220824230426-06f234f
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_33972/Runtime_33972/Runtime_33972.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 Linux arm64 Checked @ (Alpine.314.Arm64.Open)[email protected]/dotnet-buildtools/prereqs:alpine-3.14-helix-arm64v8-20210910135810-8a6f4f3
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_33972/Runtime_33972/Runtime_33972.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 windows x86 Checked @ Windows.10.Amd64.Open
- JIT\\Regression\\JitBlue\\Runtime_73681\\Runtime_73681\\Runtime_73681.cmd
R2R-CG2 Linux x64 Checked no_tiered_compilation @ Ubuntu.1804.Amd64.Open
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 Linux x64 Checked @ Ubuntu.1804.Amd64.Open
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 OSX x64 Checked no_tiered_compilation @ OSX.1200.Amd64.Open
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
R2R-CG2 OSX x64 Checked @ OSX.1200.Amd64.Open
- JIT/Regression/JitBlue/Runtime_34937/Runtime_34937/Runtime_34937.sh
- JIT/Regression/JitBlue/Runtime_73681/Runtime_73681/Runtime_73681.sh
```
**Error message:**
```
FileCheck error: '__jit_disasm.out' is empty.
FileCheck command line: D:\h\w\AF3C093A\p\SuperFileCheck\runtimes/win-arm64\native\FileCheck.exe __tmp0_Runtime_34937.cs --allow-unused-prefixes --check-prefixes=CHECK,ARM64,ARM64-WINDOWS --dump-input-context 25 --input-file __jit_disasm.out
FileCheck error: '__jit_disasm.out' is empty.
FileCheck command line: D:\h\w\AF3C093A\p\SuperFileCheck\runtimes/win-arm64\native\FileCheck.exe __tmp1_Runtime_34937.cs --allow-unused-prefixes --check-prefixes=CHECK,ARM64,ARM64-WINDOWS --dump-input-context 25 --input-file __jit_disasm.out
FileCheck error: '__jit_disasm.out' is empty.
FileCheck command line: D:\h\w\AF3C093A\p\SuperFileCheck\runtimes/win-arm64\native\FileCheck.exe __tmp2_Runtime_34937.cs --allow-unused-prefixes --check-prefixes=CHECK,ARM64,ARM64-WINDOWS --dump-input-context 25 --input-file __jit_disasm.out
Return code: 1
Raw output file: D:\h\w\AF3C093A\w\BF5A0A64\uploads\Reports\JIT.Regression\JitBlue\Runtime_34937\Runtime_34937\Runtime_34937.output.txt
Raw output:
BEGIN EXECUTION
Runtime_34937.dll
TestLibrary.dll
2 file(s) copied.
15:23:09.79
Response file: D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\Runtime_34937.dll.rsp
D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\IL-CG2\Runtime_34937.dll
-o:D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\Runtime_34937.dll
--targetarch:arm64
--verify-type-and-field-layout
--method-layout:random
-r:D:\h\w\AF3C093A\p\System..dll
-r:D:\h\w\AF3C093A\p\Microsoft..dll
-r:D:\h\w\AF3C093A\p\mscorlib.dll
-r:D:\h\w\AF3C093A\p\netstandard.dll
-O
" "dotnet" "D:\h\w\AF3C093A\p\crossgen2\crossgen2.dll" @"D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\Runtime_34937.dll.rsp" -r:D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\IL-CG2*.dll"
Emitting R2R PE file: D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\Runtime_34937.dll
15:23:13.56
15:23:13.57
Response file: D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\TestLibrary.dll.rsp
D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\IL-CG2\TestLibrary.dll
-o:D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\TestLibrary.dll
--targetarch:arm64
--verify-type-and-field-layout
--method-layout:random
-r:D:\h\w\AF3C093A\p\System..dll
-r:D:\h\w\AF3C093A\p\Microsoft..dll
-r:D:\h\w\AF3C093A\p\mscorlib.dll
-r:D:\h\w\AF3C093A\p\netstandard.dll
-O
" "dotnet" "D:\h\w\AF3C093A\p\crossgen2\crossgen2.dll" @"D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\TestLibrary.dll.rsp" -r:D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\IL-CG2*.dll"
Emitting R2R PE file: D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\TestLibrary.dll
15:23:17.99
"D:\h\w\AF3C093A\p\corerun.exe" -p "System.Reflection.Metadata.MetadataUpdater.IsSupported=false" Runtime_34937.dll
EXECUTION OF FILECHECK - FAILED 1
Test Harness Exitcode is : 1
To run the test:
set CORE_ROOT=D:\h\w\AF3C093A\p
D:\h\w\AF3C093A\w\BF5A0A64\e\JIT\Regression\JitBlue\Runtime_34937\Runtime_34937\Runtime_34937.cmd
Expected: True
Actual: False
Stack trace
at JIT_Regression._JitBlue_Runtime_34937_Runtime_34937_Runtime_34937_._JitBlue_Runtime_34937_Runtime_34937_Runtime_34937_cmd()
at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor)
at System.Reflection.MethodInvoker.Invoke(Object obj, IntPtr* args, BindingFlags invokeAttr)
```
| 1.0 |
| non_requirement | test failure jit regression jitblue runtime runtime runtime cmd run failed test windows checked no tiered compilation windows open jit regression jitblue runtime runtime runtime cmd jit regression jitblue runtime runtime runtime cmd jit regression jitblue runtime runtime runtime cmd windows checked windows open jit regression jitblue runtime runtime runtime cmd jit regression jitblue runtime runtime runtime cmd jit regression jitblue runtime runtime runtime cmd windows arm checked no tiered compilation windows open jit regression jitblue runtime runtime runtime cmd windows arm checked windows open jit regression jitblue runtime runtime runtime cmd windows checked no tiered compilation windows open jit regression jitblue runtime runtime runtime cmd jit regression jitblue runtime runtime runtime cmd linux checked no tiered compilation ubuntu open ubuntu armarch open mcr microsoft com dotnet buildtools prereqs ubuntu helix jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh windows checked no tiered compilation windows open jit regression jitblue runtime runtime runtime cmd linux checked no tiered compilation alpine open ubuntu armarch open mcr microsoft com dotnet buildtools prereqs alpine helix jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh windows checked windows open jit regression jitblue runtime runtime runtime cmd jit regression jitblue runtime runtime runtime cmd linux checked no tiered compilation alpine open ubuntu open mcr microsoft com dotnet buildtools prereqs alpine helix jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh linux checked alpine open ubuntu open mcr microsoft com dotnet buildtools prereqs alpine helix jit regression jitblue runtime runtime runtime sh jit regression jitblue 
runtime runtime runtime sh linux checked ubuntu open ubuntu armarch open mcr microsoft com dotnet buildtools prereqs ubuntu helix jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh linux checked alpine open ubuntu armarch open mcr microsoft com dotnet buildtools prereqs alpine helix jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh windows checked windows open jit regression jitblue runtime runtime runtime cmd linux checked no tiered compilation ubuntu open jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh linux checked ubuntu open jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh osx checked no tiered compilation osx open jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh osx checked osx open jit regression jitblue runtime runtime runtime sh jit regression jitblue runtime runtime runtime sh error message filecheck error jit disasm out is empty filecheck command line d h w p superfilecheck runtimes win native filecheck exe runtime cs allow unused prefixes check prefixes check windows dump input context input file jit disasm out filecheck error jit disasm out is empty filecheck command line d h w p superfilecheck runtimes win native filecheck exe runtime cs allow unused prefixes check prefixes check windows dump input context input file jit disasm out filecheck error jit disasm out is empty filecheck command line d h w p superfilecheck runtimes win native filecheck exe runtime cs allow unused prefixes check prefixes check windows dump input context input file jit disasm out return code raw output file d h w w uploads reports jit regression jitblue runtime runtime runtime output txt raw output begin execution runtime dll 
testlibrary dll file s copied response file d h w w e jit regression jitblue runtime runtime runtime dll rsp d h w w e jit regression jitblue runtime runtime il runtime dll o d h w w e jit regression jitblue runtime runtime runtime dll targetarch verify type and field layout method layout random r d h w p system dll r d h w p microsoft dll r d h w p mscorlib dll r d h w p netstandard dll o dotnet d h w p dll d h w w e jit regression jitblue runtime runtime runtime dll rsp r d h w w e jit regression jitblue runtime runtime il dll emitting pe file d h w w e jit regression jitblue runtime runtime runtime dll response file d h w w e jit regression jitblue runtime runtime testlibrary dll rsp d h w w e jit regression jitblue runtime runtime il testlibrary dll o d h w w e jit regression jitblue runtime runtime testlibrary dll targetarch verify type and field layout method layout random r d h w p system dll r d h w p microsoft dll r d h w p mscorlib dll r d h w p netstandard dll o dotnet d h w p dll d h w w e jit regression jitblue runtime runtime testlibrary dll rsp r d h w w e jit regression jitblue runtime runtime il dll emitting pe file d h w w e jit regression jitblue runtime runtime testlibrary dll d h w p corerun exe p system reflection metadata metadataupdater issupported false runtime dll execution of filecheck failed test harness exitcode is to run the test set core root d h w p d h w w e jit regression jitblue runtime runtime runtime cmd expected true actual false stack trace at jit regression jitblue runtime runtime runtime jitblue runtime runtime runtime cmd at system runtimemethodhandle invokemethod object target void arguments signature sig boolean isconstructor at system reflection methodinvoker invoke object obj intptr args bindingflags invokeattr | 0 |
11,338 | 16,993,020,919 | IssuesEvent | 2021-07-01 00:13:57 | celo-org/wallet | https://api.github.com/repos/celo-org/wallet | closed | Add cEUR support to the list of balances in the sidebar menu | Priority: P2 TestQuality requirement wallet | [Design in Figma](https://www.figma.com/file/oLgp5ZQamOR1vHRWHMcOpv/Celo-Euro?node-id=1%3A3)
- [ ] Change the contract call;
- [ ] Display the balance. Logic to display the balance:
Nothing changes for users holding only cUSD or cUSD + CELO balance.
If the user holds both cUSD + cEUR balances, both of them will be displayed here. In this case, user will see 3 balances. If the user holds only cEUR balance, they will see only cEUR balance here.
- [ ] Call to fetch current exchange rate to fiat currency of choice.
| 1.0 | Add cEUR support to the list of balances in the sidebar menu - [Design in Figma](https://www.figma.com/file/oLgp5ZQamOR1vHRWHMcOpv/Celo-Euro?node-id=1%3A3)
- [ ] Change the contract call;
- [ ] Display the balance. Logic to display the balance:
Nothing changes for users holding only cUSD or cUSD + CELO balance.
If the user holds both cUSD + cEUR balances, both of them will be displayed here. In this case, user will see 3 balances. If the user holds only cEUR balance, they will see only cEUR balance here.
- [ ] Call to fetch current exchange rate to fiat currency of choice.
| requirement | add ceur support to the list of balances in the sidebar menu change the contract call display the balance logic to display the balance nothing changes for users holding only cusd or cusd celo balance if the user holds both cusd ceur balances both of them will be displayed here in this case user will see balances if the user holds only ceur balance they will see only ceur balance here call to fetch current exchange rate to fiat currency of choice | 1 |
72,435 | 15,225,941,491 | IssuesEvent | 2021-02-18 08:10:11 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Security Solution] Getting error 404 while fetching details for exceptionlist | Feature:Rule Exceptions Team: SecuritySolution Team:Detections and Resp fixed impact:high | **Describe the bug**
Getting error 404 while fetching details for exceptionlist
**Build Details:**
```
Platform: Staging
Version: 7.11.0
Commit: f3abc08ac648f8b302733c5c22a39048314a027c
Build number: 37399
Artifact: https://staging.elastic.co/7.11.0-710164a0/summary-7.11.0.html
```
**Browser Details**
All
**Preconditions**
1. Cloud environment on staging should exist.
2. Endpoint should be deployed with Security Integration installed.
**Steps to Reproduce**
1. Navigate to Kibana URL on Browser.
2. Click on the "Detection" tab under Security from the left navigation bar.
3. Click on the 'Manage Detections rules' and select the 'Elastic Security' rule.
4. Navigate to the exception list and observe errors pop up
**Test data**
N/A
**Impacted Test case(s)**
N/A
**Actual Result**
Getting error 404 while fetching details for exception list
**Expected Result**
No error should pop up while fetching details for exception list
**What's Working**
N/A
**What's not Working**
N/A
**Screenshots**


**Logs**
 | True | [Security Solution] Getting error 404 while fetching details for exceptionlist - **Describe the bug**
Getting error 404 while fetching details for exceptionlist
**Build Details:**
```
Platform: Staging
Version: 7.11.0
Commit: f3abc08ac648f8b302733c5c22a39048314a027c
Build number: 37399
Artifact: https://staging.elastic.co/7.11.0-710164a0/summary-7.11.0.html
```
**Browser Details**
All
**Preconditions**
1. Cloud environment on staging should exist.
2. Endpoint should be deployed with Security Integration installed.
**Steps to Reproduce**
1. Navigate to Kibana URL on Browser.
2. Click on the "Detection" tab under Security from the left navigation bar.
3. Click on the 'Manage Detections rules' and select the 'Elastic Security' rule.
4. Navigate to the exception list and observe errors pop up
**Test data**
N/A
**Impacted Test case(s)**
N/A
**Actual Result**
Getting error 404 while fetching details for exception list
**Expected Result**
No error should pop up while fetching details for exception list
**What's Working**
N/A
**What's not Working**
N/A
**Screenshots**


**Logs**
 | non_requirement | getting error while fetching details for exceptionlist describe the bug getting error while fetching details for exceptionlist build details platform staging version commit build number artifact browser details all preconditions cloud environment on staging should exist endpoint should be deployed with security integration installed steps to reproduce navigate to kibana url on browser click on the detection tab under security from the left navigation bar click on the manage detections rules and select the elastic security rule navigate to the exception list and observe errors pop up test data n a impacted test case s n a actual result getting error while fetching details for exception list expected result no error should pop up while fetching details for exception list what s working n a what s not working n a screenshots logs | 0 |
3,695 | 6,147,545,918 | IssuesEvent | 2017-06-27 15:55:21 | cp317/Website | https://api.github.com/repos/cp317/Website | closed | Requirements document David Brown grammar feedback | Requirements | The terms within the definitions section should be listed in alphabetical order. The entire document should be in present tense. | 1.0 | Requirements document David Brown grammar feedback - The terms within the definitions section should be listed in alphabetical order. The entire document should be in present tense. | requirement | requirements document david brown grammar feedback the terms within the definitions section should be listed in alphabetical order the entire document should be in present tense | 1 |
42,874 | 11,095,471,275 | IssuesEvent | 2019-12-16 09:10:56 | oasislabs/oasis-core | https://api.github.com/repos/oasislabs/oasis-core | opened | build: Automatically audit go dependencies as part of the CI cycle | c:build c:security golang p:2 | We have `cargo audit` integrated into our workflow (#2154), we should do something similar for the Go dependencies as well. As far as I can tell, https://github.com/sonatype-nexus-community/nancy looks like it will serve a similar purpose and should be straight forward to integrate.
For what it's worth, as of master at filing the ticket the tool gives our dependencies a clean build of health like thus:
```
arnhem :: Documents/Development/nancy ‹master› % ./nancy -quiet ../oasislabs/repos/ekiden/go/go.sum
Audited dependencies: 204, Vulnerable: 0
arnhem :: Documents/Development/nancy ‹master› %
``` | 1.0 | build: Automatically audit go dependencies as part of the CI cycle - We have `cargo audit` integrated into our workflow (#2154), we should do something similar for the Go dependencies as well. As far as I can tell, https://github.com/sonatype-nexus-community/nancy looks like it will serve a similar purpose and should be straight forward to integrate.
For what it's worth, as of master at filing the ticket the tool gives our dependencies a clean build of health like thus:
```
arnhem :: Documents/Development/nancy ‹master› % ./nancy -quiet ../oasislabs/repos/ekiden/go/go.sum
Audited dependencies: 204, Vulnerable: 0
arnhem :: Documents/Development/nancy ‹master› %
``` | non_requirement | build automatically audit go dependencies as part of the ci cycle we have cargo audit integrated into our workflow we should do something similar for the go dependencies as well as far as i can tell looks like it will serve a similar purpose and should be straight forward to integrate for what it s worth as of master at filing the ticket the tool gives our dependencies a clean build of health like thus arnhem documents development nancy ‹master› nancy quiet oasislabs repos ekiden go go sum audited dependencies vulnerable arnhem documents development nancy ‹master› | 0 |
60,206 | 17,023,368,973 | IssuesEvent | 2021-07-03 01:39:55 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Merkaartor.pro broken in revision 13799 | Component: merkaartor Priority: trivial Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 10.17pm, Thursday, 19th February 2009]**
Index: Merkaartor.pro
===================================================================
--- Merkaartor.pro (revision 13799)
+++ Merkaartor.pro (working copy)
@@ -70,7 +70,7 @@
translations/merkaartor_ar.ts \
translations/merkaartor_cs.ts \
translations/merkaartor_de.ts \
- translations/merkaartor_es.ts
+ translations/merkaartor_es.ts \
translations/merkaartor_fr.ts \
translations/merkaartor_it.ts \
translations/merkaartor_pl.ts \
@@ -80,7 +80,7 @@
translations/merkaartor_ar.qm \
translations/merkaartor_cs.qm \
translations/merkaartor_de.qm \
- translations/merkaartor_es.ts
+ translations/merkaartor_es.ts \
translations/merkaartor_fr.qm \
translations/merkaartor_it.qm \
translations/merkaartor_pl.qm \
| 1.0 | Merkaartor.pro broken in revision 13799 - **[Submitted to the original trac issue database at 10.17pm, Thursday, 19th February 2009]**
Index: Merkaartor.pro
===================================================================
--- Merkaartor.pro (revision 13799)
+++ Merkaartor.pro (working copy)
@@ -70,7 +70,7 @@
translations/merkaartor_ar.ts \
translations/merkaartor_cs.ts \
translations/merkaartor_de.ts \
- translations/merkaartor_es.ts
+ translations/merkaartor_es.ts \
translations/merkaartor_fr.ts \
translations/merkaartor_it.ts \
translations/merkaartor_pl.ts \
@@ -80,7 +80,7 @@
translations/merkaartor_ar.qm \
translations/merkaartor_cs.qm \
translations/merkaartor_de.qm \
- translations/merkaartor_es.ts
+ translations/merkaartor_es.ts \
translations/merkaartor_fr.qm \
translations/merkaartor_it.qm \
translations/merkaartor_pl.qm \
| non_requirement | merkaartor pro broken in revision index merkaartor pro merkaartor pro revision merkaartor pro working copy translations merkaartor ar ts translations merkaartor cs ts translations merkaartor de ts translations merkaartor es ts translations merkaartor es ts translations merkaartor fr ts translations merkaartor it ts translations merkaartor pl ts translations merkaartor ar qm translations merkaartor cs qm translations merkaartor de qm translations merkaartor es ts translations merkaartor es ts translations merkaartor fr qm translations merkaartor it qm translations merkaartor pl qm | 0 |
5,314 | 7,837,321,471 | IssuesEvent | 2018-06-18 05:18:56 | stanczyk-203999/SimpleNotesApp | https://api.github.com/repos/stanczyk-203999/SimpleNotesApp | closed | Zaimplementować testy jednostkowe | Basic requirement | Zaimplementować testy z wykorzystaniem JUnit (najlepiej 5), AssertJ, Mockito. | 1.0 | Zaimplementować testy jednostkowe - Zaimplementować testy z wykorzystaniem JUnit (najlepiej 5), AssertJ, Mockito. | requirement | zaimplementować testy jednostkowe zaimplementować testy z wykorzystaniem junit najlepiej assertj mockito | 1 |
5,486 | 8,038,011,501 | IssuesEvent | 2018-07-30 14:18:28 | substance/texture | https://api.github.com/repos/substance/texture | closed | Additional fields for proceedings and articles in proceedings | requirements | > Moreover, for conference proceedings and articles in conference proceedings, you may want to consider also:
>
> - conference name (besides the proceeding name which equals the book title)
> - place of the event (besides the place of publication)
| 1.0 | Additional fields for proceedings and articles in proceedings - > Moreover, for conference proceedings and articles in conference proceedings, you may want to consider also:
>
> - conference name (besides the proceeding name which equals the book title)
> - place of the event (besides the place of publication)
| requirement | additional fields for proceedings and articles in proceedings moreover for conference proceedings and articles in conference proceedings you may want to consider also conference name besides the proceeding name which equals the book title place of the event besides the place of publication | 1 |
401,432 | 27,329,607,004 | IssuesEvent | 2023-02-25 13:01:25 | mdeslippe/easy-tracker | https://api.github.com/repos/mdeslippe/easy-tracker | closed | Initialize the Easy Tracker Migrations | documentation enhancement | - Create a `migrations` directory and create a [sqlx-cli](https://github.com/launchbadge/sqlx) project inside of it to manage database migrations.
- Make sure all migrations support both `up` and `down` operations.
- Create a README.md file for the `migrations` directory and provide a detailed explanation of the project, as well as conventions that should be followed.
- See if it is possible to build the application without running it. If it is possible, Create a GitHub Action to test building the application. | 1.0 | Initialize the Easy Tracker Migrations - - Create a `migrations` directory and create a [sqlx-cli](https://github.com/launchbadge/sqlx) project inside of it to manage database migrations.
- Make sure all migrations support both `up` and `down` operations.
- Create a README.md file for the `migrations` directory and provide a detailed explanation of the project, as well as conventions that should be followed.
- See if it is possible to build the application without running it. If it is possible, Create a GitHub Action to test building the application. | non_requirement | initialize the easy tracker migrations create a migrations directory and create a project inside of it to manage database migrations make sure all migrations support both up and down operations create a readme md file for the migrations directory and provide a detailed explanation of the project as well as conventions that should be followed see if it is possible to build the application without running it if it is possible create a github action to test building the application | 0 |
10,350 | 15,008,432,524 | IssuesEvent | 2021-01-31 10:04:20 | AndrewMiBoyd/wi21-cse110-lab3 | https://api.github.com/repos/AndrewMiBoyd/wi21-cse110-lab3 | opened | [REQ] Margins | requirement | - Long (margin-top, margin-bottom, margin-left, margin-right)
- Short (margin: top right bottom left)
- auto
| 1.0 | [REQ] Margins - - Long (margin-top, margin-bottom, margin-left, margin-right)
- Short (margin: top right bottom left)
- auto
| requirement | margins long margin top margin bottom margin left margin right short margin top right bottom left auto | 1 |
5,167 | 7,717,891,729 | IssuesEvent | 2018-05-23 14:51:09 | JoshRiley/DSMS | https://api.github.com/repos/JoshRiley/DSMS | opened | Treatments Database | requirement | Types of Treatment on offer
Band category
Link to another database with bands to get the price of the treatment | 1.0 | Treatments Database - Types of Treatment on offer
Band category
Link to another database with bands to get the price of the treatment | requirement | treatments database types of treatment on offer band category link to another database with bands to get the price of the treatment | 1 |
311,603 | 9,536,151,602 | IssuesEvent | 2019-04-30 09:00:06 | conan-io/docs | https://api.github.com/repos/conan-io/docs | closed | Document conans.model.Version | complex: medium priority: low stage: queue type: engineering | Document this class, as it will be referenced from `CMake.get_version()` (#838) | 1.0 | Document conans.model.Version - Document this class, as it will be referenced from `CMake.get_version()` (#838) | non_requirement | document conans model version document this class as it will be referenced from cmake get version | 0 |
3,580 | 6,026,868,192 | IssuesEvent | 2017-06-08 12:29:41 | gregstewart/alexa-gadgetzan-gazette | https://api.github.com/repos/gregstewart/alexa-gadgetzan-gazette | closed | Invocation name requirements for formatting | certification requirement | 1. Your skill does not meet our invocation name requirements for formatting. The invocation name must contain only lower-case alphabetic characters, spaces between words, possessive apostrophes (for example, “sam’s science trivia”), or periods used in abbreviations (for example, “a. b. c.”). Other characters like numbers must be spelled out. For example, “twenty one”.
Please correct the invocation name as suggested below:
blizzardnewsflash => blizzard news flash
Please review our documentation on choosing an invocation name and update your invocation name and example phrases accordingly. | 1.0 | Invocation name requirements for formatting - 1. Your skill does not meet our invocation name requirements for formatting. The invocation name must contain only lower-case alphabetic characters, spaces between words, possessive apostrophes (for example, “sam’s science trivia”), or periods used in abbreviations (for example, “a. b. c.”). Other characters like numbers must be spelled out. For example, “twenty one”.
Please correct the invocation name as suggested below:
blizzardnewsflash => blizzard news flash
Please review our documentation on choosing an invocation name and update your invocation name and example phrases accordingly. | requirement | invocation name requirements for formatting your skill does not meet our invocation name requirements for formatting the invocation name must contain only lower case alphabetic characters spaces between words possessive apostrophes for example “sam’s science trivia” or periods used in abbreviations for example “a b c ” other characters like numbers must be spelled out for example “twenty one” please correct the invocation name as suggested below blizzardnewsflash blizzard news flash please review our documentation on choosing an invocation name and update your invocation name and example phrases accordingly | 1 |
12,457 | 19,986,188,094 | IssuesEvent | 2022-01-30 17:49:04 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Not updating package | type:bug status:requirements priority-5-triage | ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### Please select which platform you are using if self-hosting.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
Doesn't seem to be picking up a new version of my package
This is the package name that I updated about 10 hrs ago as of writing this
@aw-web-design/theme
And my repo components doesn't get updated with a pr.
### Relevant debug logs
I have looked at the logs and it doesn't pick up in the updates array for the package.
### Have you created a minimal reproduction repository?
No reproduction repository | 1.0 | Not updating package - ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### Please select which platform you are using if self-hosting.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
Doesn't seem to be picking up a new version of my package
This is the package name that I updated about 10 hrs ago as of writing this
@aw-web-design/theme
And my repo components doesn't get updated with a pr.
### Relevant debug logs
I have looked at the logs and it doesn't pick up in the updates array for the package.
### Have you created a minimal reproduction repository?
No reproduction repository | requirement | not updating package how are you running renovate whitesource renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response please select which platform you are using if self hosting no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped it used to work and then stopped describe the bug doesn t seem to be picking up a new version of my package this is the package name that i updated about hrs ago as of writing this aw web design theme and my repo components doesn t get updated with a pr relevant debug logs i have looked at the logs and it doesn t pick up in the updates array for the package have you created a minimal reproduction repository no reproduction repository | 1 |
259 | 2,589,428,887 | IssuesEvent | 2015-02-18 12:37:49 | nextgis/ngm_clink_monitoring | https://api.github.com/repos/nextgis/ngm_clink_monitoring | opened | Очередной рефакторинг форм | High Priority Requirement | В связи с тем, что список линий может быть очень большим, необходимо разделить вторую форму ввода на две новые формы: форма выбора линии (в виде списка, аналогичного форме выбора объекта) и форма выбора объектов в линии (аналогичная той что есть сейчас, но без контрола выбора линии).
Общий вид перехода между формами теперь должен быть такой:
Стартовый экран <-> Выбор линии <-> Выбор объекта <-> Изменение статуса и работа с фото
В случае измерения линии:
Стартовый экран <-> Выбор линии <-> Изменение статуса и работа с фото
В заголовках форм необходимо вставить следующие надписи:
Compulink Monitoring <-> Выберите линию <-> Выберите объект <-> Укажите статус | 1.0 | Очередной рефакторинг форм - В связи с тем, что список линий может быть очень большим, необходимо разделить вторую форму ввода на две новые формы: форма выбора линии (в виде списка, аналогичного форме выбора объекта) и форма выбора объектов в линии (аналогичная той что есть сейчас, но без контрола выбора линии).
Общий вид перехода между формами теперь должен быть такой:
Стартовый экран <-> Выбор линии <-> Выбор объекта <-> Изменение статуса и работа с фото
В случае измерения линии:
Стартовый экран <-> Выбор линии <-> Изменение статуса и работа с фото
В заголовках форм необходимо вставить следующие надписи:
Compulink Monitoring <-> Выберите линию <-> Выберите объект <-> Укажите статус | requirement | очередной рефакторинг форм в связи с тем что список линий может быть очень большим необходимо разделить вторую форму ввода на две новые формы форма выбора линии в виде списка аналогичного форме выбора объекта и форма выбора объектов в линии аналогичная той что есть сейчас но без контрола выбора линии общий вид перехода между формами теперь должен быть такой стартовый экран выбор линии выбор объекта изменение статуса и работа с фото в случае измерения линии стартовый экран выбор линии изменение статуса и работа с фото в заголовках форм необходимо вставить следующие надписи compulink monitoring выберите линию выберите объект укажите статус | 1 |
12,957 | 21,556,494,831 | IssuesEvent | 2022-04-30 14:06:38 | G-Motivation/licensePlate | https://api.github.com/repos/G-Motivation/licensePlate | opened | Payment page | requirement | Hi all,
Please implement payment page, you need switch between with license plate recognition page smoothly.
1. Add UI for input license number.

2. It has three status:
Already pay
Not pay
Wrong license number
3. Build database to check driver is paid or not.
| 1.0 | Payment page - Hi all,
Please implement payment page, you need switch between with license plate recognition page smoothly.
1. Add UI for input license number.

2. It has three status:
Already pay
Not pay
Wrong license number
3. Build database to check driver is paid or not.
| requirement | payment page hi all please implement payment page you need switch between with license plate recognition page smoothly add ui for input license number it has three status already pay not pay wrong license number build database to check driver is paid or not | 1 |
24,225 | 17,020,365,232 | IssuesEvent | 2021-07-02 17:58:27 | tfreytag/WoPeD | https://api.github.com/repos/tfreytag/WoPeD | closed | SSL für Webservices | team: infrastructure | Aktuell werden die Webservices T2P und P2T noch ohne SSL (d. h. ohne https also ohne Verschlüsselung) betrieben. Dies soll sich in Zukunft ändern.
Die Webservices werden direkt über ihren Port angesprochen (z. B. http://woped.dhbw-karlsruhe.de:8081/t2p/, oder http://woped.dhbw-karlsruhe.de:8082/p2t/). Das stellt keine optimale Lösung dar, da zu viele Informationen über den Server nach außen preisgegeben werden (niemanden hat zu interessieren, auf welchem Port der Service betrieben wird).
Auf dem Server werden aktuell folgenden Services betrachtet:
- der alte P2T- und T2P-Webservice mit Wildfly
- der neue P2T-Webservice in einem Docker-Container
- der neue T2P-Webservice in einem Docker-Container
Daraus ergeben sich folgende Aufgaben
- [ ] Falls noch nicht vorhanden, Wissen zum Umgang mit der Kommandozeile unter Linux/Ubuntu und zu Docker aneignen (denn wir verwenden ein headless Ubuntu und betreiben die neuen Webservices in Docker Containern)
- [ ] Evaluieren, ob die neuen Webservices weiterhin über ihren Port angesprochen werden sollen, oder nur über die URL (z. B. Vorschlag: https://woped.dhbw-karlsruhe.de/webservices/t2p) - Stichwort Reverse-Proxy
- [ ] Klären, ob auch für die alten Webservices SSL unterstützt werden soll.
- [ ] Herausfinden, wie und wo ein SSL Zertifikat am besten hinterlegt wird, um für die beiden neuen Webservices SSL zu unterstützen. Falls die Webservices nicht mehr über ihre Ports angesprochen werden sollen, auch die Verwendung eines Reverse-Proxy evaluieren (z. B. `nginx`).
- [ ] SSL Unterstützung implementieren. Falls die Webservices nicht mehr über ihre Ports angesprochen werden sollen, auch die Anpassungen im WopeD Client vornehmen (Konfigurationseinstellungen zur Verbindung mit dem Webservices) | 1.0 | SSL für Webservices - Aktuell werden die Webservices T2P und P2T noch ohne SSL (d. h. ohne https also ohne Verschlüsselung) betrieben. Dies soll sich in Zukunft ändern.
Die Webservices werden direkt über ihren Port angesprochen (z. B. http://woped.dhbw-karlsruhe.de:8081/t2p/, oder http://woped.dhbw-karlsruhe.de:8082/p2t/). Das stellt keine optimale Lösung dar, da zu viele Informationen über den Server nach außen preisgegeben werden (niemanden hat zu interessieren, auf welchem Port der Service betrieben wird).
Auf dem Server werden aktuell folgenden Services betrachtet:
- der alte P2T- und T2P-Webservice mit Wildfly
- der neue P2T-Webservice in einem Docker-Container
- der neue T2P-Webservice in einem Docker-Container
Daraus ergeben sich folgende Aufgaben
- [ ] Falls noch nicht vorhanden, Wissen zum Umgang mit der Kommandozeile unter Linux/Ubuntu und zu Docker aneignen (denn wir verwenden ein headless Ubuntu und betreiben die neuen Webservices in Docker Containern)
- [ ] Evaluieren, ob die neuen Webservices weiterhin über ihren Port angesprochen werden sollen, oder nur über die URL (z. B. Vorschlag: https://woped.dhbw-karlsruhe.de/webservices/t2p) - Stichwort Reverse-Proxy
- [ ] Klären, ob auch für die alten Webservices SSL unterstützt werden soll.
- [ ] Herausfinden, wie und wo ein SSL Zertifikat am besten hinterlegt wird, um für die beiden neuen Webservices SSL zu unterstützen. Falls die Webservices nicht mehr über ihre Ports angesprochen werden sollen, auch die Verwendung eines Reverse-Proxy evaluieren (z. B. `nginx`).
- [ ] SSL Unterstützung implementieren. Falls die Webservices nicht mehr über ihre Ports angesprochen werden sollen, auch die Anpassungen im WopeD Client vornehmen (Konfigurationseinstellungen zur Verbindung mit dem Webservices) | non_requirement | ssl für webservices aktuell werden die webservices und noch ohne ssl d h ohne https also ohne verschlüsselung betrieben dies soll sich in zukunft ändern die webservices werden direkt über ihren port angesprochen z b oder das stellt keine optimale lösung dar da zu viele informationen über den server nach außen preisgegeben werden niemanden hat zu interessieren auf welchem port der service betrieben wird auf dem server werden aktuell folgenden services betrachtet der alte und webservice mit wildfly der neue webservice in einem docker container der neue webservice in einem docker container daraus ergeben sich folgende aufgaben falls noch nicht vorhanden wissen zum umgang mit der kommandozeile unter linux ubuntu und zu docker aneignen denn wir verwenden ein headless ubuntu und betreiben die neuen webservices in docker containern evaluieren ob die neuen webservices weiterhin über ihren port angesprochen werden sollen oder nur über die url z b vorschlag stichwort reverse proxy klären ob auch für die alten webservices ssl unterstützt werden soll herausfinden wie und wo ein ssl zertifikat am besten hinterlegt wird um für die beiden neuen webservices ssl zu unterstützen falls die webservices nicht mehr über ihre ports angesprochen werden sollen auch die verwendung eines reverse proxy evaluieren z b nginx ssl unterstützung implementieren falls die webservices nicht mehr über ihre ports angesprochen werden sollen auch die anpassungen im woped client vornehmen konfigurationseinstellungen zur verbindung mit dem webservices | 0 |
12,583 | 20,324,751,912 | IssuesEvent | 2022-02-18 03:57:46 | NASA-PDS/harvest | https://api.github.com/repos/NASA-PDS/harvest | opened | As a user, I want to be able to see a summary of all logs messages after harvest execution completes | requirement p.must-have icebox | <!--
For more information on how to populate this new feature request, see the PDS Wiki on User Story Development:
https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development
-->
## 💪 Motivation
...so that I can easily tell if products failed, if there are warnings I should check out, etc.
## 📖 Additional Details
<!-- Please prove any additional details or information that could help provide some context for the user story. -->
## ⚖️ Acceptance Criteria
**Given** a bundle with a product label that contains something that will cause an error in harvest
**When I perform** a harvest execution on that bundle that includes that product
**Then I expect** to see a part in the summary that includes a number of errors (and warnings) that occurred during the execution.
<!-- For Internal Dev Team Use -->
## ⚙️ Engineering Details
<!--
Provide some design / implementation details and/or a sub-task checklist as needed.
Convert issue to Epic if estimate is outside the scope of 1 sprint.
-->
| 1.0 | As a user, I want to be able to see a summary of all logs messages after harvest execution completes - <!--
For more information on how to populate this new feature request, see the PDS Wiki on User Story Development:
https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development
-->
## 💪 Motivation
...so that I can easily tell if products failed, if there are warnings I should check out, etc.
## 📖 Additional Details
<!-- Please prove any additional details or information that could help provide some context for the user story. -->
## ⚖️ Acceptance Criteria
**Given** a bundle with a product label that contains something that will cause an error in harvest
**When I perform** a harvest execution on that bundle that includes that product
**Then I expect** to see a part in the summary that includes a number of errors (and warnings) that occurred during the execution.
<!-- For Internal Dev Team Use -->
## ⚙️ Engineering Details
<!--
Provide some design / implementation details and/or a sub-task checklist as needed.
Convert issue to Epic if estimate is outside the scope of 1 sprint.
-->
| requirement | as a user i want to be able to see a summary of all logs messages after harvest execution completes for more information on how to populate this new feature request see the pds wiki on user story development 💪 motivation so that i can easily tell if products failed if there are warnings i should check out etc 📖 additional details ⚖️ acceptance criteria given a bundle with a product label that contains something that will cause an error in harvest when i perform a harvest execution on that bundle that includes that product then i expect to see a part in the summary that includes a number of errors and warnings that occurred during the execution ⚙️ engineering details provide some design implementation details and or a sub task checklist as needed convert issue to epic if estimate is outside the scope of sprint | 1 |
9,011 | 12,518,834,458 | IssuesEvent | 2020-06-03 13:34:20 | GruppOne/stalker-web-app | https://api.github.com/repos/GruppOne/stalker-web-app | closed | R097F1 | requirement | È necessario che l'amministratore confermi di voler procedere con l'eliminazione dell'account selezionato | 1.0 | R097F1 - È necessario che l'amministratore confermi di voler procedere con l'eliminazione dell'account selezionato | requirement | è necessario che l amministratore confermi di voler procedere con l eliminazione dell account selezionato | 1 |
333,465 | 10,126,670,740 | IssuesEvent | 2019-08-01 08:29:07 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | bandcamp.com - site is not usable | browser-firefox-mobile engine-gecko priority-important | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://bandcamp.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: login doesn't work
**Steps to Reproduce**:
Login on mobile or desktop version
[](https://webcompat.com/uploads/2019/8/daf6b146-5c76-4af6-9dfe-fb7f84771832.jpeg)
[](https://webcompat.com/uploads/2019/8/737a140d-87ab-4c5a-b2c7-07144df07e40.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190715033856</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: default</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[console.log([object Object]) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:60:11]', u'[console.info( [Tr]->[CF] Disabled Canvas Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:1491:33]', u'[console.info( [Tr]->[HW] Modified hardware information.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:1095:33]', u'[console.info( [Tr]->[CR] Disabled getClientRects Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:653:33]', u'[console.info( [Tr]->[AF] Using smart Audio Fingerprinting Protection) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:481:33]', u'[console.info( [Tr]->[NP] Disabled Plugin Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:774:33]', u'[console.info( [Tr]->[RJ] Disabled Referer Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:552:33]', u'[console.info( [Tr]->[SB] Disabled Ping Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:541:33]', u'[console.info( [Tr]->[BA] Disabled Battery API Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:515:33]', u'[console.info( [Tr]->[GL] Modified WebGL Information.) 
moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:1295:33]', u'[console.log(09:17:13.214:, ErrorCollector: enabled) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.218:, Capabilities: registered test hasSVG; classname=has-svg) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.221:, Capabilities: registered test hasCSSOM; classname=has-cssom) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.224:, Capabilities: registered test hasHover; classname=no-hover) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.226:, Capabilities: registered test hasTouch; classname=has-touch) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.227:, Capabilities: registered test hasAnimation; classname=has-anim) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.837:, Control: got DOMReady) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.947:, got param launch_edit_design=undefined from url query) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.949:, hiding params [launch_edit_design] from url query) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.info(09:17:14.162:, Cookie comm channel fan_verification started listening.) 
https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:14.170:, FanControls.getData: data is not yet present) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log( [TracePage]Page loaded ) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/loaded.js:19:11]', u'[console.log(09:17:15.012:, bcweekly: rendering tracklist) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:15.635:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.596:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.598:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.730:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.797:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.799:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.846:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.849:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.852:, FanControls.initializeItems) 
https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.854:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.857:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]']
</pre>
</details>
Submitted in the name of `@Nex8192`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | bandcamp.com - site is not usable - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://bandcamp.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: login doesn't work
**Steps to Reproduce**:
Login on mobile or desktop version
[](https://webcompat.com/uploads/2019/8/daf6b146-5c76-4af6-9dfe-fb7f84771832.jpeg)
[](https://webcompat.com/uploads/2019/8/737a140d-87ab-4c5a-b2c7-07144df07e40.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190715033856</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: default</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[console.log([object Object]) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:60:11]', u'[console.info( [Tr]->[CF] Disabled Canvas Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:1491:33]', u'[console.info( [Tr]->[HW] Modified hardware information.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:1095:33]', u'[console.info( [Tr]->[CR] Disabled getClientRects Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:653:33]', u'[console.info( [Tr]->[AF] Using smart Audio Fingerprinting Protection) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:481:33]', u'[console.info( [Tr]->[NP] Disabled Plugin Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:774:33]', u'[console.info( [Tr]->[RJ] Disabled Referer Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:552:33]', u'[console.info( [Tr]->[SB] Disabled Ping Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:541:33]', u'[console.info( [Tr]->[BA] Disabled Battery API Tracking.) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:515:33]', u'[console.info( [Tr]->[GL] Modified WebGL Information.) 
moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/page.js:1295:33]', u'[console.log(09:17:13.214:, ErrorCollector: enabled) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.218:, Capabilities: registered test hasSVG; classname=has-svg) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.221:, Capabilities: registered test hasCSSOM; classname=has-cssom) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.224:, Capabilities: registered test hasHover; classname=no-hover) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.226:, Capabilities: registered test hasTouch; classname=has-touch) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.227:, Capabilities: registered test hasAnimation; classname=has-anim) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.837:, Control: got DOMReady) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.947:, got param launch_edit_design=undefined from url query) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:13.949:, hiding params [launch_edit_design] from url query) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.info(09:17:14.162:, Cookie comm channel fan_verification started listening.) 
https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:14.170:, FanControls.getData: data is not yet present) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log( [TracePage]Page loaded ) moz-extension://30e67465-12d6-406e-ab5d-6c57ce5e894b/js/contentscript/loaded.js:19:11]', u'[console.log(09:17:15.012:, bcweekly: rendering tracklist) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:15.635:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.596:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.598:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.730:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.797:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.799:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.846:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.849:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.852:, FanControls.initializeItems) 
https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.854:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]', u'[console.log(09:17:20.857:, FanControls.initializeItems) https://s4.bcbits.com/tmpdata/cache/global_head.fr_bundle_min_d242b12a15baed03753beb9086b48f58.js:343:43]']
</pre>
</details>
Submitted in the name of `@Nex8192`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_requirement | bandcamp com site is not usable url browser version firefox mobile operating system android tested another browser no problem type site is not usable description login doesn t work steps to reproduce login on mobile or desktop version browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel default console messages moz extension js contentscript page js u disabled canvas tracking moz extension js contentscript page js u modified hardware information moz extension js contentscript page js u disabled getclientrects tracking moz extension js contentscript page js u using smart audio fingerprinting protection moz extension js contentscript page js u disabled plugin tracking moz extension js contentscript page js u disabled referer tracking moz extension js contentscript page js u disabled ping tracking moz extension js contentscript page js u disabled battery api tracking moz extension js contentscript page js u modified webgl information moz extension js contentscript page js u u u u u u u u u from url query u u u page loaded moz extension js contentscript loaded js u u u u u u u u u u u u submitted in the name of from with ❤️ | 0 |
54,504 | 6,393,284,018 | IssuesEvent | 2017-08-04 06:53:55 | frappe/erpnext | https://api.github.com/repos/frappe/erpnext | closed | Tests related to Calculations | testing | - [x] Multi currency
- [x] Multi UOM
- [x] Discount on item
- [x] Discount on total
- [x] Tax
- [x] Item wise Tax
- [x] Change Price List
- [x] Shipping Rule
- [x] Serialized Item
- [x] Batched Item | 1.0 | Tests related to Calculations - - [x] Multi currency
- [x] Multi UOM
- [x] Discount on item
- [x] Discount on total
- [x] Tax
- [x] Item wise Tax
- [x] Change Price List
- [x] Shipping Rule
- [x] Serialized Item
- [x] Batched Item | non_requirement | tests related to calculations multi currency multi uom discount on item discount on total tax item wise tax change price list shipping rule serialized item batched item | 0 |
9,368 | 13,231,781,978 | IssuesEvent | 2020-08-18 12:18:03 | ianyong/ip | https://api.github.com/repos/ianyong/ip | closed | A-TextUiTesting: Automated Text UI Testing | cs2103t-requirement enhancement | - **A-TextUiTesting**: Test using the I/O redirection technique
- Use the input/output redirection technique to semi-automate the testing of Duke.
Notes:
- A tutorial of this technique is [here](https://se-education.org/guides/tutorials/textUiTesting.html).
- The required scripts are provided in the Duke repo (see the `text-ui-test` folder).
Refer to https://nus-cs2103-ay2021s1.github.io/website/schedule/week2/project.html#a-textuitesting. | 1.0 | A-TextUiTesting: Automated Text UI Testing - - **A-TextUiTesting**: Test using the I/O redirection technique
- Use the input/output redirection technique to semi-automate the testing of Duke.
Notes:
- A tutorial of this technique is [here](https://se-education.org/guides/tutorials/textUiTesting.html).
- The required scripts are provided in the Duke repo (see the `text-ui-test` folder).
Refer to https://nus-cs2103-ay2021s1.github.io/website/schedule/week2/project.html#a-textuitesting. | requirement | a textuitesting automated text ui testing a textuitesting test using the i o redirection technique use the input output redirection technique to semi automate the testing of duke notes a tutorial of this technique is the required scripts are provided in the duke repo see the text ui test folder refer to | 1 |
45,065 | 9,669,108,265 | IssuesEvent | 2019-05-21 16:32:37 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | opened | Reevaluate view model for collection creation flow | Feature:Collections 🤒 code health | [In this PR](https://github.com/mozilla-mobile/fenix/pull/2652/files), @boek and I decided to land with the given implementation, but to come back and reevaluate how we handle the view model when transitioning between the fragments, and passing data to the many adapters used in the complicated view. | 1.0 | Reevaluate view model for collection creation flow - [In this PR](https://github.com/mozilla-mobile/fenix/pull/2652/files), @boek and I decided to land with the given implementation, but to come back and reevaluate how we handle the view model when transitioning between the fragments, and passing data to the many adapters used in the complicated view. | non_requirement | reevaluate view model for collection creation flow boek and i decided to land with the given implementation but to come back and reevaluate how we handle the view model when transitioning between the fragments and passing data to the many adapters used in the complicated view | 0 |
2,697 | 5,040,206,143 | IssuesEvent | 2016-12-19 03:41:26 | fkucuk/Fall2016Swe573 | https://api.github.com/repos/fkucuk/Fall2016Swe573 | closed | Implement ActivityResources REST class | development task requirement implementation REST resource implementation | Detailed information and endpoints' description are available in Design Spesifications Document. | 1.0 | Implement ActivityResources REST class - Detailed information and endpoints' description are available in Design Spesifications Document. | requirement | implement activityresources rest class detailed information and endpoints description are available in design spesifications document | 1 |
31,503 | 8,705,818,896 | IssuesEvent | 2018-12-05 23:52:13 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | closed | Delegate AWS region error checking to AWS Go SDK | builder/amazon enhancement | The error checking that occurs on a user-passed AWS region should be removed and delegated to the AWS Go SDK to future proof `packer` for the introduction of new regions/endpoints. The Go SDK has a much more comprehensive description of which regions and endpoints are valid, and since Amazon supports it, it's more likely to more quickly match their service offerings as those offerings change.
| 1.0 | Delegate AWS region error checking to AWS Go SDK - The error checking that occurs on a user-passed AWS region should be removed and delegated to the AWS Go SDK to future proof `packer` for the introduction of new regions/endpoints. The Go SDK has a much more comprehensive description of which regions and endpoints are valid, and since Amazon supports it, it's more likely to more quickly match their service offerings as those offerings change.
| non_requirement | delegate aws region error checking to aws go sdk the error checking that occurs on a user passed aws region should be removed and delegated to the aws go sdk to future proof packer for the introduction of new regions endpoints the go sdk has a much more comprehensive description of which regions and endpoints are valid and since amazon supports it it s more likely to more quickly match their service offerings as those offerings change | 0 |
12,024 | 18,779,534,534 | IssuesEvent | 2021-11-08 03:38:15 | goharbor/harbor | https://api.github.com/repos/goharbor/harbor | closed | "GC NOW" button should reconfirm! | area/ui area/gc kind/requirement | **Is your feature request related to a problem? Please describe.**
As we all know, In the process of garbage collection, we can not push image to harbor. If I click the button "GC NOW", I can not stop the gc task actually. In the production environment, the gc task often takes a long time. And on the web UI, when I click the "GC NOW" button, it begins to GC directly with no reconfirm! So if I click the button by mistake or other things happen, it will cause serious consequences.

**Describe the solution you'd like**
I think when user click the "GC NOW" button, it should **pops up a prompt box** with the warning message, the user will have to click "OK" to reconfirm this dangerous operation or "Cancel" to cancel this operation.
| 1.0 | "GC NOW" button should reconfirm! - **Is your feature request related to a problem? Please describe.**
As we all know, In the process of garbage collection, we can not push image to harbor. If I click the button "GC NOW", I can not stop the gc task actually. In the production environment, the gc task often takes a long time. And on the web UI, when I click the "GC NOW" button, it begins to GC directly with no reconfirm! So if I click the button by mistake or other things happen, it will cause serious consequences.

**Describe the solution you'd like**
I think when user click the "GC NOW" button, it should **pops up a prompt box** with the warning message, the user will have to click "OK" to reconfirm this dangerous operation or "Cancel" to cancel this operation.
| requirement | gc now button should reconfirm! is your feature request related to a problem please describe as we all know in the process of garbage collection we can not push image to harbor if i click the button gc now i can not stop the gc task actually in the production environment the gc task often takes a long time and on the web ui when i click the gc now button it begins to gc directly with no reconfirm so if i click the button by mistake or other things happen it will cause serious consequences describe the solution you d like i think when user click the gc now button it should pops up a prompt box with the warning message the user will have to click ok to reconfirm this dangerous operation or cancel to cancel this operation | 1 |
127,125 | 26,987,273,661 | IssuesEvent | 2023-02-09 17:01:05 | cosmos/cosmos-sdk | https://api.github.com/repos/cosmos/cosmos-sdk | opened | Fix linting issues | help wanted good first issue Type: Code Hygiene | The SDK uses [golangci-lint](https://github.com/golangci/golangci-lint) for it's linting.
Recently, due to the bump of the minimum version to Go 1.20, the golangci-lint version has been bumped to a version that supports Go 1.20. Doing that surfaced a few linting issues that should be addressed.
To surface them and check which ones are left, run `make lint-fix`. | 1.0 | Fix linting issues - The SDK uses [golangci-lint](https://github.com/golangci/golangci-lint) for it's linting.
Recently, due to the bump of the minimum version to Go 1.20, the golangci-lint version has been bumped to a version that supports Go 1.20. Doing that surfaced a few linting issues that should be addressed.
To surface them and check which ones are left, run `make lint-fix`. | non_requirement | fix linting issues the sdk uses for it s linting recently due to the bump of the minimum version to go the golangci lint version has been bumped to a version that supports go doing that surfaced a few linting issues that should be addressed to surface them and check which ones are left run make lint fix | 0 |
10,492 | 15,229,320,151 | IssuesEvent | 2021-02-18 12:45:03 | sul-dlss/happy-heron | https://api.github.com/repos/sul-dlss/happy-heron | closed | Manager should be able to see Approvals table on Dashboard | PO beta requirement bug | This used to work, but now I am not seeing the items I have pending approval.
Top of the dashboard does not have the Approvals table:

But I do have items in collections I own that are awaiting approval:
 | 1.0 | Manager should be able to see Approvals table on Dashboard - This used to work, but now I am not seeing the items I have pending approval.
Top of the dashboard does not have the Approvals table:

But I do have items in collections I own that are awaiting approval:
 | requirement | manager should be able to see approvals table on dashboard this used to work but now i am not seeing the items i have pending approval top of the dashboard does not have the approvals table but i do have items in collections i own that are awaiting approval | 1 |
51,774 | 21,844,901,754 | IssuesEvent | 2022-05-18 03:03:27 | quocthinhvo/status | https://api.github.com/repos/quocthinhvo/status | opened | 🛑 SSH Service is down | status ssh-service | In [`e4667bb`](https://github.com/quocthinhvo/status/commit/e4667bb4689503548c259f88dadb4dc574617db1
), SSH Service ($URL_BASE) was **down**:
- HTTP code: 0
- Response time: 0 ms
| 1.0 | 🛑 SSH Service is down - In [`e4667bb`](https://github.com/quocthinhvo/status/commit/e4667bb4689503548c259f88dadb4dc574617db1
), SSH Service ($URL_BASE) was **down**:
- HTTP code: 0
- Response time: 0 ms
| non_requirement | 🛑 ssh service is down in ssh service url base was down http code response time ms | 0 |
22,113 | 18,720,498,737 | IssuesEvent | 2021-11-03 11:12:39 | ethersphere/swarm-cli | https://api.github.com/repos/ethersphere/swarm-cli | closed | Curl output could be displayed with different color | enhancement issue usability | Currently the curl commands printed use the same color as the default text color. It would be good to choose a different color so that it stands out from the general output. | True | Curl output could be displayed with different color - Currently the curl commands printed use the same color as the default text color. It would be good to choose a different color so that it stands out from the general output. | non_requirement | curl output could be displayed with different color currently the curl commands printed use the same color as the default text color it would be good to choose a different color so that it stands out from the general output | 0 |
7,361 | 10,660,717,955 | IssuesEvent | 2019-10-18 10:33:00 | ValerioLucandri/WorkineThor | https://api.github.com/repos/ValerioLucandri/WorkineThor | opened | Read file | Functional Requirements | The system shall allow all members of the group to read the files uploaded to the job folder | 1.0 | Read file - The system shall allow all members of the group to read the files uploaded to the job folder | requirement | read file the system shall allow all members of the group to read the files uploaded to the job folder | 1 |
83,127 | 10,325,460,458 | IssuesEvent | 2019-09-01 17:31:34 | mtgred/netrunner | https://api.github.com/repos/mtgred/netrunner | closed | Undo click with Patchwork | bug help wanted who-designed-this-game-anyway | when doing /undo-click everything was reverted back, but the card I used to feed to Patchwork, it was still in the heap, so undo click is not undoing the click. | 1.0 | Undo click with Patchwork - when doing /undo-click everything was reverted back, but the card I used to feed to Patchwork, it was still in the heap, so undo click is not undoing the click. | non_requirement | undo click with patchwork when doing undo click everything was reverted back but the card i used to feed to patchwork it was still in the heap so undo click is not undoing the click | 0 |
12,752 | 20,867,167,832 | IssuesEvent | 2022-03-22 08:31:07 | ktrmb/travelDiary | https://api.github.com/repos/ktrmb/travelDiary | opened | TAG-03 – Bilder hinzufügen, ändern und löschen | user requirement | Zu jedem Tagebucheintrag können bis zu 3 Bilder hinzugefügt werden. Bilder sollen dabei beim Eintrag in einer (kleinen) Vorschau dargestellt werden, sollen aber auch vergrößert dargestellt werden können. Bilder eines Tagebucheintrages können jederzeit ergänzt (bis max. 3), ersetzt oder gelöscht werden.
| 1.0 | TAG-03 – Bilder hinzufügen, ändern und löschen - Zu jedem Tagebucheintrag können bis zu 3 Bilder hinzugefügt werden. Bilder sollen dabei beim Eintrag in einer (kleinen) Vorschau dargestellt werden, sollen aber auch vergrößert dargestellt werden können. Bilder eines Tagebucheintrages können jederzeit ergänzt (bis max. 3), ersetzt oder gelöscht werden.
| requirement | tag – bilder hinzufügen ändern und löschen zu jedem tagebucheintrag können bis zu bilder hinzugefügt werden bilder sollen dabei beim eintrag in einer kleinen vorschau dargestellt werden sollen aber auch vergrößert dargestellt werden können bilder eines tagebucheintrages können jederzeit ergänzt bis max ersetzt oder gelöscht werden | 1 |
7,412 | 10,660,784,056 | IssuesEvent | 2019-10-18 10:42:44 | ferra-rally/concert-scout | https://api.github.com/repos/ferra-rally/concert-scout | opened | Participation list | Functional requirement | The system shall provide all users the view on-screen of all user’s friends that take part in an event, selected by the user. | 1.0 | Participation list - The system shall provide all users the view on-screen of all user’s friends that take part in an event, selected by the user. | requirement | participation list the system shall provide all users the view on screen of all user’s friends that take part in an event selected by the user | 1 |
10,653 | 15,652,390,761 | IssuesEvent | 2021-03-23 11:20:18 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | An option to refresh lock file update PRs when they get stale | priority-5-triage status:requirements type:feature | **What would you like Renovate to be able to do?**
Background discussion: https://github.com/renovatebot/renovate/discussions/9190
At the moment, renovate will not update lock file update PRs unless the "Rebase" checkbox is ticked. Unless the PRs get merged quickly enough, they can go stale. This could result in a delay before the next PR is opened (by default - up to a week).
Renovate should have an option, which would allow to treat it lock file PRs as stale and refresh them without the "Rebase" checbox when enough time has passed.
**Did you already have any implementation ideas?**
A new option under `lockFileMaintenance`: `staleTimeout` (number of days)?
I'd consider working on a PR for this (would probably need some context, as there's usually multiple ways to implement things). | 1.0 | An option to refresh lock file update PRs when they get stale - **What would you like Renovate to be able to do?**
Background discussion: https://github.com/renovatebot/renovate/discussions/9190
At the moment, renovate will not update lock file update PRs unless the "Rebase" checkbox is ticked. Unless the PRs get merged quickly enough, they can go stale. This could result in a delay before the next PR is opened (by default - up to a week).
Renovate should have an option which would allow it to treat lock file PRs as stale and refresh them without the "Rebase" checkbox when enough time has passed.
**Did you already have any implementation ideas?**
A new option under `lockFileMaintenance`: `staleTimeout` (number of days)?
I'd consider working on a PR for this (would probably need some context, as there's usually multiple ways to implement things). | requirement | an option to refresh lock file update prs when they get stale what would you like renovate to be able to do background discussion at the moment renovate will not update lock file update prs unless the rebase checkbox is ticked unless the prs get merged quickly enough they can go stale this could result in a delay before the next pr is opened by default up to a week renovate should have an option which would allow it to treat lock file prs as stale and refresh them without the rebase checkbox when enough time has passed did you already have any implementation ideas a new option under lockfilemaintenance staletimeout number of days i d consider working on a pr for this would probably need some context as there s usually multiple ways to implement things | 1
97,396 | 3,992,286,478 | IssuesEvent | 2016-05-10 00:41:23 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Add `kubectl sh`, an interactive shell | component/kubectl priority/P2 team/ux | `kubectl sh` will list pods/containers to choose from, and then spawn a shell (default `/bin/sh`) into that container (a shortcut for `exec`ing into a container for debugging).
@kubernetes/kubectl | 1.0 | Add `kubectl sh`, an interactive shell - `kubectl sh` will list pods/containers to choose from, and then spawn a shell (default `/bin/sh`) into that container (a shortcut for `exec`ing into a container for debugging).
@kubernetes/kubectl | non_requirement | add kubectl sh an interactive shell kubectl sh will list pods containers to choose from and then spawn a shell default bin sh into that container a shortcut for exec ing into a container for debugging kubernetes kubectl | 0 |
6,087 | 8,712,755,105 | IssuesEvent | 2018-12-06 23:23:27 | zlavere/GroupCMosaicMaker | https://api.github.com/repos/zlavere/GroupCMosaicMaker | closed | Implement a button that will produce and display the mosaic. | part 1 requirement | Both the source image and the resulting mosaic image should be visible in the application.
Anytime this button is invoked a new mosaic image should be produced and displayed.
This button should only be enabled when it would produce a new mosaic. For example,
if the user has already produced the mosaic image and has not changed the source
image, grid size, or any other item that could affect the resulting mosaic, then this
button should be disabled. | 1.0 | Implement a button that will produce and display the mosaic. - Both the source image and the resulting mosaic image should be visible in the application.
Anytime this button is invoked a new mosaic image should be produced and displayed.
This button should only be enabled when it would produce a new mosaic. For example,
if the user has already produced the mosaic image and has not changed the source
image, grid size, or any other item that could affect the resulting mosaic, then this
button should be disabled. | requirement | implement a button that will produce and display the mosaic both the source image and the resulting mosaic image should be visible in the application anytime this button is invoked a new mosaic image should be produced and displayed this button should only be enabled when it would produce a new mosaic for example if the user has already produced the mosaic image and has not changed the source image grid size or any other item that could affect the resulting mosaic then this button should be disabled | 1 |
15,417 | 27,153,622,698 | IssuesEvent | 2023-02-17 05:03:34 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Add RPM versioning | type:feature status:requirements priority-5-triage | ### What would you like Renovate to be able to do?
Renovate should be able to parse and compare RPM versions. RPM versioning scheme is used for RPM based Linux distributions like Red Hat Linux, Rocky, or Mariner.
Note that this is different from [RedHat versioning](https://github.com/renovatebot/renovate/blob/main/lib/modules/versioning/redhat/index.ts)
### If you have any ideas on how this should be implemented, please tell us here.
I've previously implemented RPM versioning in C# and should be able to write it again in TypeScript.
### Is this a feature you are interested in implementing yourself?
Yes | 1.0 | Add RPM versioning - ### What would you like Renovate to be able to do?
Renovate should be able to parse and compare RPM versions. RPM versioning scheme is used for RPM based Linux distributions like Red Hat Linux, Rocky, or Mariner.
Note that this is different from [RedHat versioning](https://github.com/renovatebot/renovate/blob/main/lib/modules/versioning/redhat/index.ts)
### If you have any ideas on how this should be implemented, please tell us here.
I've previously implemented RPM versioning in C# and should be able to write it again in TypeScript.
### Is this a feature you are interested in implementing yourself?
Yes | requirement | add rpm versioning what would you like renovate to be able to do renovate should be able to parse and compare rpm versions rpm versioning scheme is used for rpm based linux distributions like red hat linux rocky or mariner note that this is different from if you have any ideas on how this should be implemented please tell us here i ve previously implemented rpm versioning in c and should be able to write it again in typescript is this a feature you are interested in implementing yourself yes | 1 |