| Column       | Dtype         | Range / distinct values                     |
|--------------|---------------|---------------------------------------------|
| Unnamed: 0   | int64         | 0 to 1.97k                                  |
| id           | float64       | 2.49B to 32.1B                              |
| type         | stringclasses | 1 value                                     |
| created_at   | stringdate    | 2015-01-02 17:29:17 to 2023-09-25 12:49:49  |
| repo         | stringclasses | 857 values                                  |
| repo_url     | stringclasses | 857 values                                  |
| action       | stringclasses | 3 values                                    |
| title        | stringlengths | 3 to 238                                    |
| labels       | stringlengths | 9 to 347                                    |
| body         | stringlengths | 4 to 84.9k                                  |
| index        | float64       | 1 to 7                                      |
| text_combine | stringlengths | 30 to 85k                                   |
| label        | stringclasses | 13 values                                   |
| text         | stringlengths | 13 to 80.4k                                 |
0
2,562,691,887
IssuesEvent
2015-02-06 05:04:33
NordikSoft/minesweeper
https://api.github.com/repos/NordikSoft/minesweeper
closed
add tests for the command line flag parser
architecture debt task
Separate the command line parser from the rest of the application so that it can be published and tested. Write test cases for the utility to cover the following cases: * all six of the arguments are called and captured * calling with unknown values * calling with garbled data * calling with impossible values (negative, more mines than spots, zero, ...) * any other tests that seem worthwhile Modify the functionality so that the values are checked at startup, before they are passed to the board curator.
1
add tests for the command line flag parser - Separate the command line parser from the rest of the application so that it can be published and tested. Write test cases for the utility to cover the following cases: * all six of the arguments are called and captured * calling with unknown values * calling with garbled data * calling with impossible values (negative, more mines than spots, zero, ...) * any other tests that seem worthwhile Modify the functionality so that the values are checked at startup, before they are passed to the board curator.
architecture
add tests for the command line flag parser separate the command line parser from the rest of the application so that it can be published and tested write test cases for the utility to cover the following cases all six of the arguments are called and captured calling with unknown values calling with garbled data calling with impossible values negative more mines than spots zero any other tests that seem worthwhile modify the functionality so that the values are checked at startup before they are passed to the board curator
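A minimal sketch of the separation this issue asks for, written in Go rather than the project's own language and with invented flag names (`-mines`, `-spots`): the parser lives in its own function so tests can drive it directly, and impossible values are rejected before they reach the board curator.

```go
package main

import (
	"errors"
	"flag"
	"fmt"
)

// Options holds the parsed command line values.
type Options struct {
	Mines, Spots int
}

// ParseArgs is separated from main so it can be tested in isolation.
func ParseArgs(args []string) (Options, error) {
	fs := flag.NewFlagSet("minesweeper", flag.ContinueOnError)
	opts := Options{}
	fs.IntVar(&opts.Mines, "mines", 10, "number of mines")
	fs.IntVar(&opts.Spots, "spots", 81, "number of board spots")
	if err := fs.Parse(args); err != nil {
		return opts, err // unknown flags and garbled data end up here
	}
	// Reject impossible values before they reach the board curator.
	if opts.Mines <= 0 || opts.Spots <= 0 {
		return opts, errors.New("values must be positive")
	}
	if opts.Mines >= opts.Spots {
		return opts, errors.New("more mines than spots")
	}
	return opts, nil
}

func main() {
	opts, err := ParseArgs([]string{"-mines", "99", "-spots", "81"})
	fmt.Println(opts, err) // {99 81} more mines than spots
}
```

Each bullet in the issue then becomes one table-driven test case against `ParseArgs` alone, with no board code involved.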
1
2,565,130,473
IssuesEvent
2015-02-07 02:15:41
NordikSoft/minesweeper
https://api.github.com/repos/NordikSoft/minesweeper
closed
move constant values to header files
architecture debt task
Move the constant values into header files to reduce the need for recompilation of source files and ease re-use of values.
1
move constant values to header files - Move the constant values into header files to reduce the need for recompilation of source files and ease re-use of values.
architecture
move constant values to header files move the constant values into header files to reduce the need for recompilation of source files and ease re use of values
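The issue concerns C++ header files, but the underlying move carries over to other languages; a hedged Go sketch of the same idea, with invented names, where shared constants live in one low-churn package that dependents import instead of redefining the values:

```go
// Package gameconst centralizes values shared across the codebase,
// so callers import one stable, rarely-changing package instead of
// duplicating the numbers (and forcing wider rebuilds when they change).
package gameconst

const (
	DefaultBoardWidth  = 9
	DefaultBoardHeight = 9
	DefaultMineCount   = 10
)
```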
2
3,090,544,695
IssuesEvent
2015-08-26 07:31:47
ravaj-group/core-issues
https://api.github.com/repos/ravaj-group/core-issues
opened
[AdminBundle] Improve general performance
architecture technical debt
There are many things we can improve for better overall performance of the panel, such as: - [ ] Cache assets (javascripts, translations, etc) in the user's browser and load them only once. We can show a progress bar on first load so the user understands why it's taking so long to load. - [ ] Investigate directives and scopes using profiling tools to find performance bottlenecks.
1
[AdminBundle] Improve general performance - There are many things we can improve for better overall performance of the panel, such as: - [ ] Cache assets (javascripts, translations, etc) in the user's browser and load them only once. We can show a progress bar on first load so the user understands why it's taking so long to load. - [ ] Investigate directives and scopes using profiling tools to find performance bottlenecks.
architecture
improve general performance there are many things we can improve for better overall performance of the panel such as cache assets javascripts translations etc in the user s browser and load them only once we can show a progress bar on first load so the user understands why it s taking so long to load investigate directives and scopes using profiling tools to find performance bottlenecks
3
4,269,642,997
IssuesEvent
2016-07-13 01:41:27
jakimber/TDiary
https://api.github.com/repos/jakimber/TDiary
closed
Shotgun Surgery
architecture question tech debt
Added "Rating" to a Diary Item and it required too many file changes to implement fully. There has to be a better way!
1
Shotgun Surgery - Added "Rating" to a Diary Item and it required too many file changes to implement fully. There has to be a better way!
architecture
shotgun surgery added rating to a diary item and it required too many file changes to implement fully there has to be a better way
5
7,840,527,338
IssuesEvent
2018-06-18 16:37:48
AlwaysInMind/aim-web-app
https://api.github.com/repos/AlwaysInMind/aim-web-app
closed
Make AI description fetch optional
architecture technical debt
The Azure cognitive API used to get the photo descriptions is a paid service and is always called. If the Caption option is off, the description is not required and should not be fetched. - [ ] API request to server needs an option - [ ] API is set at module load, so we need to re-architect to use dynamic values, like IDs
1
Make AI description fetch optional - The Azure cognitive API used to get the photo descriptions is a paid service and is always called. If the Caption option is off, the description is not required and should not be fetched. - [ ] API request to server needs an option - [ ] API is set at module load, so we need to re-architect to use dynamic values, like IDs
architecture
make ai description fetch optional the azure cognitive api used to get the photo descriptions is a paid service and is always called if the caption option is off the description is not required and should not be fetched api request to server needs an option api is set at module load so we need to re architect to use dynamic values like ids
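A hedged sketch of the gating logic in Go (the real app is JavaScript, and `captionsEnabled`/`fetchDescription` are invented names): the paid description call is skipped entirely whenever captions are off.

```go
package photos

import "context"

// describe fetches a description from the paid service only when
// captions are enabled; otherwise it returns empty without any API call.
func describe(ctx context.Context, photoID string, captionsEnabled bool,
	fetchDescription func(context.Context, string) (string, error)) (string, error) {
	if !captionsEnabled {
		return "", nil // skip the paid API call entirely
	}
	return fetchDescription(ctx, photoID)
}
```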
8
8,576,385,544
IssuesEvent
2018-11-12 20:13:05
fga-eps-mds/2018.2-GamesBI
https://api.github.com/repos/fga-eps-mds/2018.2-GamesBI
closed
Remove the importdata app from CrossData
0 - Architecture 1 - API 2 - Bug 2 - Technical Viability 3 - Debt 4 - Evolution 6 - Python/Flask
## Description <!--- Describe the reason of the issue, what is the problem you want to solve, the bug you want to fix, etc --> After the project architecture was updated, the 'importdata' app no longer had any use, so it should be removed. ## Tasks - [x] Move the models to the API app - [x] Move the serializers to the API app - [x] Remove all mentions of importdata in the project - [x] Delete the importdata app - [x] Generate new fixtures ## Acceptance criteria <!--- describe what needs to be done so this issue can be closed --> The project must keep working normally ## How will this benefit the project? It will bring better organization to the project
1
Remove the importdata app from CrossData - ## Description <!--- Describe the reason of the issue, what is the problem you want to solve, the bug you want to fix, etc --> After the project architecture was updated, the 'importdata' app no longer had any use, so it should be removed. ## Tasks - [x] Move the models to the API app - [x] Move the serializers to the API app - [x] Remove all mentions of importdata in the project - [x] Delete the importdata app - [x] Generate new fixtures ## Acceptance criteria <!--- describe what needs to be done so this issue can be closed --> The project must keep working normally ## How will this benefit the project? It will bring better organization to the project
architecture
remove the importdata app from crossdata description after the project architecture was updated the importdata app no longer had any use so it should be removed tasks move the models to the api app move the serializers to the api app remove all mentions of importdata in the project delete the importdata app generate new fixtures acceptance criteria the project must keep working normally how will this benefit the project it will bring better organization to the project
11
9,186,831,666
IssuesEvent
2019-03-06 00:17:48
vmware/vic
https://api.github.com/repos/vmware/vic
closed
Structured errors
Epic kind/architecture kind/debt kind/enhancement priority/p4 resolution/will-not-fix
https://github.com/vmware/vic/pull/1708/files#diff-20cac0652b997c8f1ee1d0fb89688480R706 is an example of what happens when we don't define errors as types. We should implement a set of types like the net package (https://golang.org/pkg/net/) and start using type assertions. #3636 should define these errors as an integral part of the interface work for the portlayer.
1
Structured errors - https://github.com/vmware/vic/pull/1708/files#diff-20cac0652b997c8f1ee1d0fb89688480R706 is an example of what happens when we don't define errors as types. We should implement a set of types like the net package (https://golang.org/pkg/net/) and start using type assertions. #3636 should define these errors as an integral part of the interface work for the portlayer.
architecture
structured errors is an example of what happens when we don t define errors as types we should implement a set of types like the net package and start using type assertions should define these errors as an integral part of the interface work for the portlayer
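The net-package style the issue points at, sketched minimally in Go; `NotFoundError` is an illustrative type, not one of the portlayer's actual errors:

```go
package errs

import (
	"errors"
	"fmt"
)

// NotFoundError is a structured error callers can match on,
// instead of parsing message strings.
type NotFoundError struct {
	Kind, Name string
}

func (e *NotFoundError) Error() string {
	return fmt.Sprintf("%s %q not found", e.Kind, e.Name)
}

func handle(err error) bool {
	// A type assertion via errors.As replaces string comparison.
	var nf *NotFoundError
	return errors.As(err, &nf)
}
```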
12
9,383,358,987
IssuesEvent
2019-04-05 03:01:32
fga-eps-mds/2019.1-Wendy
https://api.github.com/repos/fga-eps-mds/2019.1-Wendy
closed
Create a roadmap for each front
Architecture Owner Devops EPS Product Owner Tech Lead Technical Debt
In this issue the following will be done: - [x] DevOps must create their roadmap for the first release - [x] The Architecture Owner must create their roadmap for the first release - [x] The Product Owner must create their roadmap for the first release - [x] The Tech Lead must create their roadmap for the first release Acceptance criteria: - [x] Documented on the wiki - [x] Each role with its own color according to the project color palette - [x] All R1 sprints filled in
1
Create a roadmap for each front - In this issue the following will be done: - [x] DevOps must create their roadmap for the first release - [x] The Architecture Owner must create their roadmap for the first release - [x] The Product Owner must create their roadmap for the first release - [x] The Tech Lead must create their roadmap for the first release Acceptance criteria: - [x] Documented on the wiki - [x] Each role with its own color according to the project color palette - [x] All R1 sprints filled in
architecture
create a roadmap for each front in this issue the following will be done devops must create their roadmap for the first release the architecture owner must create their roadmap for the first release the product owner must create their roadmap for the first release the tech lead must create their roadmap for the first release acceptance criteria documented on the wiki each role with its own color according to the project color palette all r1 sprints filled in
13
9,478,932,162
IssuesEvent
2019-04-20 02:59:46
fga-eps-mds/2019.1-Gaia
https://api.github.com/repos/fga-eps-mds/2019.1-Gaia
closed
Define how the integration between the microservices will work
Architecture Owner EPS Project Backlog Technical Debt
Define how the integration between the microservices will work. In this issue the following will be done: - [x] Explain how the integration between microservices will be done - [x] Justify the choice of integration Acceptance criteria: - [x] Documented on the wiki
1
Define how the integration between the microservices will work - Define how the integration between the microservices will work. In this issue the following will be done: - [x] Explain how the integration between microservices will be done - [x] Justify the choice of integration Acceptance criteria: - [x] Documented on the wiki
architecture
define how the integration between the microservices will work define how the integration between the microservices will work in this issue the following will be done explain how the integration between microservices will be done justify the choice of integration acceptance criteria documented on the wiki
14
9,975,770,741
IssuesEvent
2019-07-09 13:46:51
woocommerce/woocommerce-android
https://api.github.com/repos/woocommerce/woocommerce-android
closed
Convert to using Android Navigation Components
AndroidX Architecture Tech Debt
I met with a Google engineer during Google I/O to talk about some of the issues we've been having with managing fragment states in this woo app. The grand takeaways from my session were: 1. Everyone has issues with managing the fragment lifecycle, even developers at Google. 2. This is precisely why the Android team created the Jetpack navigation components. So the recommendation was to switch our architecture over to using these new components now that they are in production.
1
Convert to using Android Navigation Components - I met with a Google engineer during Google I/O to talk about some of the issues we've been having with managing fragment states in this woo app. The grand takeaways from my session were: 1. Everyone has issues with managing the fragment lifecycle, even developers at Google. 2. This is precisely why the Android team created the Jetpack navigation components. So the recommendation was to switch our architecture over to using these new components now that they are in production.
architecture
convert to using android navigation components i met with a google engineer during google i o to talk about some of the issues we ve been having with managing fragment states in this woo app the grand takeaways from my session were everyone has issues with managing the fragment lifecycle even developers at google this is precisely why the android team created the jetpack navigation components so the recommendation was to switch our architecture over to using these new components now that they are in production
15
10,878,807,026
IssuesEvent
2019-11-16 20:16:39
fga-eps-mds/2019.2-Acacia
https://api.github.com/repos/fga-eps-mds/2019.2-Acacia
closed
Configure the production environment
EPS architecture devops technical debt technical story
**Description** As ***devops***, I would like to ***configure the production environment*** in order to ***make the product available in a stable way***. **Depends on:** #79 **Acceptance Criteria** - It must have the field ...; - It must calculate/present ...; - Functionality tested; **Tasks** - [ ] It is not an epic. [Read](https://sitecampus.com.br/user-story-epico-e-tema-qual-diferenca/); - [ ] It is testable; - [ ] It can be estimated by the development team; - [ ] It brings value to the business; **Notes** - The issue must be estimated in points; - The issue must be assigned to someone; - The issue must have labels;
1
Configure the production environment - **Description** As ***devops***, I would like to ***configure the production environment*** in order to ***make the product available in a stable way***. **Depends on:** #79 **Acceptance Criteria** - It must have the field ...; - It must calculate/present ...; - Functionality tested; **Tasks** - [ ] It is not an epic. [Read](https://sitecampus.com.br/user-story-epico-e-tema-qual-diferenca/); - [ ] It is testable; - [ ] It can be estimated by the development team; - [ ] It brings value to the business; **Notes** - The issue must be estimated in points; - The issue must be assigned to someone; - The issue must have labels;
architecture
configure the production environment description as devops i would like to configure the production environment in order to make the product available in a stable way depends on acceptance criteria it must have the field it must calculate present functionality tested tasks it is not an epic read it is testable it can be estimated by the development team it brings value to the business notes the issue must be estimated in points the issue must be assigned to someone the issue must have labels
16
10,932,557,749
IssuesEvent
2019-11-23 18:35:49
fga-eps-mds/2019.2-Acacia
https://api.github.com/repos/fga-eps-mds/2019.2-Acacia
closed
Configure continuous delivery
EPS architecture devops technical debt technical story
**Description** As ***devops***, I would like to ***configure continuous delivery*** in order to ***automate the delivery of the product***. **Acceptance Criteria** - Pushes to the master branch triggering the production environment deploy workflow **Tasks** - [x] Configure the GitHub Actions workflow **Notes** - This issue depends on the production environment configuration in order to be fully validated
1
Configure continuous delivery - **Description** As ***devops***, I would like to ***configure continuous delivery*** in order to ***automate the delivery of the product***. **Acceptance Criteria** - Pushes to the master branch triggering the production environment deploy workflow **Tasks** - [x] Configure the GitHub Actions workflow **Notes** - This issue depends on the production environment configuration in order to be fully validated
architecture
configure continuous delivery description as devops i would like to configure continuous delivery in order to automate the delivery of the product acceptance criteria pushes to the master branch triggering the production environment deploy workflow tasks configure the github actions workflow notes this issue depends on the production environment configuration in order to be fully validated
18
12,060,151,218
IssuesEvent
2020-04-15 20:38:12
COVID-19-electronic-health-system/Corona-tracker
https://api.github.com/repos/COVID-19-electronic-health-system/Corona-tracker
closed
Create QA client in CoronaTracker AWS
architecture tech debt v2
## Summary - [ ] Create a new S3 bucket containing the most up-to-date copy of the site - [ ] Share this with the community, preferably pinning it in large Discord groups and potentially adding it to the README (reach out to @whoabuddy for this) ## Motivation While long-term we may look to add more environments with separate test backends and all, for the time being a simple separate client where we will deploy in order to test functionality before deploying to production will suffice. This also fully removes my personal AWS from the picture. ## Describe alternatives you've considered Waiting until we have a separate QA/Staging backend in order to implement this, but we really should have somewhere to test the client, as that's currently the most heavily-under-development aspect of CoronaTracker
1
Create QA client in CoronaTracker AWS - ## Summary - [ ] Create a new S3 bucket containing the most up-to-date copy of the site - [ ] Share this with the community, preferably pinning it in large Discord groups and potentially adding it to the README (reach out to @whoabuddy for this) ## Motivation While long-term we may look to add more environments with separate test backends and all, for the time being a simple separate client where we will deploy in order to test functionality before deploying to production will suffice. This also fully removes my personal AWS from the picture. ## Describe alternatives you've considered Waiting until we have a separate QA/Staging backend in order to implement this, but we really should have somewhere to test the client, as that's currently the most heavily-under-development aspect of CoronaTracker
architecture
create qa client in coronatracker aws summary create a new bucket containing the most up to date copy of the site share this with the community preferably pinning it in large discord groups and potentially adding it to the readme reach out to whoabuddy for this motivation while long term we may look to add more environments with separate test backends and all but for the time being a simple separate client where we will deploy in order to test functionality before deploying to production will suffice this also fully removes my personal aws from the picture describe alternatives you ve considered waiting until we have a separate qa staging backend in order to implement this but we really should have somewhere to test the client as that s currently the most heavily under development aspect of coronatracker
19
12,237,227,972
IssuesEvent
2020-05-04 17:39:59
eurofurence/ef-app_ios
https://api.github.com/repos/eurofurence/ef-app_ios
closed
Swap out ApplicationDirector for content routers
rearchitecture technical debt
The ApplicationDirector is pretty huge/gross as it has to handle actions explicitly from modules alongside deep linking. Moving to a router-based system will make things a lot more flexible/sane
1
Swap out ApplicationDirector for content routers - The ApplicationDirector is pretty huge/gross as it has to handle actions explicitly from modules alongside deep linking. Moving to a router-based system will make things a lot more flexible/sane
architecture
swap out applicationdirector for content routers the applicationdirector is pretty huge gross as it has to handle actions explicitly from modules alongside deep linking moving to a router based system will make things a lot more flexible sane
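A rough sketch of the router idea in Go (the app itself is iOS, and `Router` and its API are invented for illustration): modules register handlers for the content kinds they can present, so no single director has to switch over every action and deep link.

```go
package routing

import "fmt"

// Router dispatches content to whichever handler registered for it,
// replacing one central director that knows about every module.
type Router struct {
	handlers map[string]func(payload string)
}

func New() *Router {
	return &Router{handlers: map[string]func(string){}}
}

// Register is called by each module for the content kinds it owns.
func (r *Router) Register(kind string, h func(payload string)) {
	r.handlers[kind] = h
}

// Route forwards payloads (module actions or deep links alike).
func (r *Router) Route(kind, payload string) error {
	h, ok := r.handlers[kind]
	if !ok {
		return fmt.Errorf("no handler for %q", kind)
	}
	h(payload)
	return nil
}
```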
20
12,301,448,147
IssuesEvent
2020-05-11 15:25:37
kids-first/kf-portal-ui
https://api.github.com/repos/kids-first/kf-portal-ui
closed
Persona: Migrate storage from MongoDB to DocumentDB
architecture backend tech debt to groom
As devops admins, we want a system that is as easy as possible to deploy. Our devops asked us to move away from MongoDB as much as possible. We can then discuss moving to DocumentDB, or another backend. ## Acceptance criteria - Migrate Persona so that its storage backend is DocumentDB, not MongoDB.
1
Persona: Migrate storage from MongoDB to DocumentDB - As devops admins, we want a system that is as easy as possible to deploy. Our devops asked us to move away from MongoDB as much as possible. We can then discuss moving to DocumentDB, or another backend. ## Acceptance criteria - Migrate Persona so that its storage backend is DocumentDB, not MongoDB.
architecture
persona migrate storage from mongodb to documentdb as devops admins we want a system that is as easy as possible to deploy our devops asked us to move away from mongodb as much as possible we can then discuss moving to documentdb or another backend acceptance criteria migrate persona so that its storage backend is documentdb not mongodb
21
13,246,793,308
IssuesEvent
2020-08-19 16:12:56
dusk-network/dusk-blindbid
https://api.github.com/repos/dusk-network/dusk-blindbid
closed
Migrate the lib circuit deps to plonk_gadgets
area:architecture type:tech-debt
We have ported all of the general-purpose gadgets we currently have to the repo https://github.com/dusk-network/plonk_gadgets. It would be nice to migrate the library to use them as a dependency, so we reduce the code and the responsibilities of this repo to just holding the logic for the blindbid ops.
1
Migrate the lib circuit deps to plonk_gadgets - We have ported all of the general-purpose gadgets we currently have to the repo https://github.com/dusk-network/plonk_gadgets. It would be nice to migrate the library to use them as a dependency, so we reduce the code and the responsibilities of this repo to just holding the logic for the blindbid ops.
architecture
migrate the lib circuit deps to plonk gadgets since we ported to the repo all of the general purpose gadgets we currently have it would be nice to migrate the library to use them as a dependency so we reduce the code and the responsibilities of this repo to just hold the logic for the blindbid ops
23
13,657,383,859
IssuesEvent
2020-09-28 05:36:33
infinyon/fluvio
https://api.github.com/repos/infinyon/fluvio
closed
PartitionStatus mapping error
Kubernetes doc/architecture technical debt
platform: `ubuntu-18` steps to reproduce: ```make smoke-test-tls``` log: ``` fluvio_stream_dispatcher::dispatcher::k8_ws_service: invalid type: string "Failure", expected struct PartitionStatus at line 7 column 21 Sep 23 17:13:48.020 ERROR flv_tls_proxy: error copying: Broken pipe (os error 32) Sep 23 17:13:48.098 ERROR k8_client::native::client: error decoding raw stream : "message": "Operation cannot be fulfilled on partitions.fluvio.infinyon.com \"topic2-0\": the object has been modified; please apply your changes to the latest version and try again", "reason": "Conflict", "details": { "name": "topic2-0", "group": "fluvio.infinyon.com", "kind": "partitions" }, "code": 409 } ``` issue: Error is not properly mapped into Status.
1
PartitionStatus mapping error - platform: `ubuntu-18` steps to reproduce: ```make smoke-test-tls``` log: ``` fluvio_stream_dispatcher::dispatcher::k8_ws_service: invalid type: string "Failure", expected struct PartitionStatus at line 7 column 21 Sep 23 17:13:48.020 ERROR flv_tls_proxy: error copying: Broken pipe (os error 32) Sep 23 17:13:48.098 ERROR k8_client::native::client: error decoding raw stream : "message": "Operation cannot be fulfilled on partitions.fluvio.infinyon.com \"topic2-0\": the object has been modified; please apply your changes to the latest version and try again", "reason": "Conflict", "details": { "name": "topic2-0", "group": "fluvio.infinyon.com", "kind": "partitions" }, "code": 409 } ``` issue: Error is not properly mapped into Status.
architecture
partitionstatus mapping error platform ubuntu steps to reproduce make smoke test tls log fluvio stream dispatcher dispatcher ws service invalid type string failure expected struct partitionstatus at line column sep error flv tls proxy error copying broken pipe os error sep error client native client error decoding raw stream message operation cannot be fulfilled on partitions fluvio infinyon com the object has been modified please apply your changes to the latest version and try again reason conflict details name group fluvio infinyon com kind partitions code issue error is not properly mapped into status
24
13,683,302,912
IssuesEvent
2020-09-30 01:24:35
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
closed
StorageType+Woo: siteID is part of the pK!
Architecture Tech Debt
### Details: `StorageType+Woo` implements a set of methods used by the OrderStore. We need to extend those methods, so that the **siteID** is also considered as part of the Order's primary key.
1
StorageType+Woo: siteID is part of the pK! - ### Details: `StorageType+Woo` implements a set of methods used by the OrderStore. We need to extend those methods, so that the **siteID** is also considered as part of the Order's primary key.
architecture
storagetype woo siteid is part of the pk details storagetype woo implements a set of methods used by the orderstore we need to extend those methods so that the siteid is also considered as part of the order s primary key
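A hedged Go sketch of the data-model change (the real code is Swift/Core Data; all names here are invented): making siteID part of the primary key means orders with the same orderID from different sites can no longer collide.

```go
package storage

// OrderKey makes siteID part of the primary key, so orders with the
// same orderID from different sites no longer collide in the store.
type OrderKey struct {
	SiteID  int64
	OrderID int64
}

type OrderStore struct {
	orders map[OrderKey]string // value type is a stand-in for the Order model
}

// Upsert stores an order under the composite (siteID, orderID) key.
func (s *OrderStore) Upsert(siteID, orderID int64, order string) {
	if s.orders == nil {
		s.orders = map[OrderKey]string{}
	}
	s.orders[OrderKey{siteID, orderID}] = order
}
```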
25
13,876,693,211
IssuesEvent
2020-10-17 00:19:22
infinyon/fluvio
https://api.github.com/repos/infinyon/fluvio
closed
SPU is receiving unnecessary replica update event from SC
Priority - Critical SC SPU bug doc/architecture technical debt
steps to reproduce: ``` flvt --produce-iteration 10 ``` In the SPU log, we see: ``` Oct 10 01:39:33.494 DEBUG sc_request_loop:update_replica_request: fluvio_spu::controllers::sc::dispatcher: received replica update from sc: UpdateReplicaRequest { epoch: 0, changes: [], all: [ Replica { id: ReplicaKey { topic: "topic0", partition: 0, }, leader: 5001, replicas: [ 5001, ], }, ], } ``` and shortly after: ``` Oct 10 01:39:33.511 DEBUG sc_request_loop:update_replica_request: fluvio_spu::controllers::sc::dispatcher: received replica update from sc: UpdateReplicaRequest { epoch: 0, changes: [], all: [ Replica { id: ReplicaKey { topic: "topic0", partition: 0, }, leader: 5001, replicas: [ 5001, ], }, ], } ```
1
SPU is receiving unnecessary replica update event from SC - steps to reproduce: ``` flvt --produce-iteration 10 ``` In the SPU log, we see: ``` Oct 10 01:39:33.494 DEBUG sc_request_loop:update_replica_request: fluvio_spu::controllers::sc::dispatcher: received replica update from sc: UpdateReplicaRequest { epoch: 0, changes: [], all: [ Replica { id: ReplicaKey { topic: "topic0", partition: 0, }, leader: 5001, replicas: [ 5001, ], }, ], } ``` and shortly after: ``` Oct 10 01:39:33.511 DEBUG sc_request_loop:update_replica_request: fluvio_spu::controllers::sc::dispatcher: received replica update from sc: UpdateReplicaRequest { epoch: 0, changes: [], all: [ Replica { id: ReplicaKey { topic: "topic0", partition: 0, }, leader: 5001, replicas: [ 5001, ], }, ], } ```
architecture
spu is receiving unnecessary replica update event from sc steps to reproduce flvt produce iteration in the spu log we see oct debug sc request loop update replica request fluvio spu controllers sc dispatcher received replica update from sc updatereplicarequest epoch changes all replica id replicakey topic partition leader replicas and shortly after oct debug sc request loop update replica request fluvio spu controllers sc dispatcher received replica update from sc updatereplicarequest epoch changes all replica id replicakey topic partition leader replicas
26
13,890,138,035
IssuesEvent
2020-10-19 08:52:55
dusk-network/dusk-blockchain
https://api.github.com/repos/dusk-network/dusk-blockchain
closed
Review the mutex's ReadLock to prevent recursive RLock deadlock
area:architecture status:blocker type:bug type:tech-debt
Since RLock should not be used recursively, we need to review all files and make sure we do not end up in this situation. A way would be to avoid RLocking in unexported functions
1
Review the mutex's ReadLock to prevent recursive RLock deadlock - Since RLock should not be used recursively, we need to review all files and make sure we do not end up in this situation. A way would be to avoid RLocking in unexported functions
architecture
review the mutex s readlock to prevent recursive rlock deadlock since rlock should not be used recursively we need to review all files and make sure we do not end up in this situation a way would be to avoid rlocking in unexported functions
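The hazard, and the unexported-function convention the issue suggests, sketched in Go: a recursive RLock can deadlock when a writer queues between the two acquisitions, so exported methods lock exactly once and unexported helpers assume the lock is already held.

```go
package state

import "sync"

type Store struct {
	mu   sync.RWMutex
	data map[string]int
}

// Get is the exported entry point: it takes the read lock exactly once.
func (s *Store) Get(k string) int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	// The unexported helper must NOT RLock again: a writer queued
	// between two RLocks on the same goroutine would deadlock us.
	return s.get(k)
}

// get assumes s.mu is already held by the caller.
func (s *Store) get(k string) int {
	return s.data[k]
}
```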
27
14,234,716,834
IssuesEvent
2020-11-18 13:57:36
dusk-network/dusk-blockchain
https://api.github.com/repos/dusk-network/dusk-blockchain
closed
Investigate and adjust any RPCBus messages which are currently encoded, but can be passed directly
area:architecture need:investigation type:tech-debt
So far, I have identified: - LastCertificate - GetRoundResults - GetCandidate - GetMempoolTxs
1
Investigate and adjust any RPCBus messages which are currently encoded, but can be passed directly - So far, I have identified: - LastCertificate - GetRoundResults - GetCandidate - GetMempoolTxs
architecture
investigate and adjust any rpcbus messages which are currently encoded but can be passed directly so far i have identified lastcertificate getroundresults getcandidate getmempooltxs
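A sketch of the adjustment in Go terms, with an invented mini-API standing in for the real RPCBus: for in-process calls like the ones listed above, the value itself can cross a typed channel, removing the encode/decode round trip entirely.

```go
package rpcbus

// Before: respChan chan []byte, with encoding on one side and
// decoding on the other. After: the value crosses the channel directly.

type Certificate struct{ Round uint64 }

type request struct {
	resp chan Certificate // typed channel: no serialization round trip
}

// serve answers LastCertificate-style requests by sending the
// struct itself, not its encoded bytes.
func serve(reqs chan request, last Certificate) {
	for r := range reqs {
		r.resp <- last
	}
}
```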
28
14,258,467,990
IssuesEvent
2020-11-20 06:21:06
infinyon/fluvio
https://api.github.com/repos/infinyon/fluvio
opened
tracking: Command Extension Mechanism
CLI Installation Usability doc/architecture enhancement extensions technical debt
A new "Extension" mechanism will replace the current monolithic CLI. Existing CLI commands will move into the following extensions: * Consumer: Topic, Partitions, Consume, Produce * Cluster: SPU, Install, Uninstall, check * Engines: SPU Engine, SC Engine CLI will have commands related to extension management and profile: * List Extension * Install/Update/Uninstall Extensions * Profile * Self Update CLI will no longer have explicit dependencies to any of the extensions. An extension can be implemented in any language. The extensions will be be stored in the `extensions` folder in the fluvio configuration folder. CLI will query each extension so it can be show commands to the user.
1
tracking: Command Extension Mechanism - A new "Extension" mechanism will replace the current monolithic CLI. Existing CLI commands will move into the following extensions: * Consumer: Topic, Partitions, Consume, Produce * Cluster: SPU, Install, Uninstall, check * Engines: SPU Engine, SC Engine CLI will have commands related to extension management and profile: * List Extension * Install/Update/Uninstall Extensions * Profile * Self Update CLI will no longer have explicit dependencies on any of the extensions. An extension can be implemented in any language. The extensions will be stored in the `extensions` folder in the fluvio configuration folder. CLI will query each extension so it can show commands to the user.
architecture
tracking command extension mechanism a new extension mechanism will replace the current monolithic cli existing cli commands will move into the following extensions consumer topic partitions consume produce cluster spu install uninstall check engines spu engine sc engine cli will have commands related to extension management and profile list extension install update uninstall extensions profile self update cli will no longer have explicit dependencies on any of the extensions an extension can be implemented in any language the extensions will be stored in the extensions folder in the fluvio configuration folder cli will query each extension so it can show commands to the user
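One common shape for such a mechanism, hedged as an illustration rather than Fluvio's eventual design (the `~/.fluvio/extensions` path is an assumption): treat any executable found in the extensions folder as a subcommand, in the style of git and kubectl plugins, so extensions can be written in any language.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// runExtension executes <config dir>/extensions/<name> with the remaining
// args, so the core CLI needs no compile-time knowledge of extensions.
func runExtension(name string, args []string) error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	bin := filepath.Join(home, ".fluvio", "extensions", name)
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr, cmd.Stdin = os.Stdout, os.Stderr, os.Stdin
	return cmd.Run()
}

func main() {
	if len(os.Args) > 1 {
		if err := runExtension(os.Args[1], os.Args[2:]); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```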
30
14,762,040,121
IssuesEvent
2021-01-09 01:30:56
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
closed
Review: ViewModels
Architecture Tech Debt [Type] Enhancement
### Details: Analyze and (if possible) replace the ViewModels with extensions / cell configuration methods.
1
Review: ViewModels - ### Details: Analyze and (if possible) replace the ViewModels with extensions / cell configuration methods.
architecture
review viewmodels details analyze and if possible replace the viewmodels with extensions cell configuration methods
32
15,226,667,288
IssuesEvent
2021-02-18 09:11:58
dusk-network/rusk
https://api.github.com/repos/dusk-network/rusk
closed
Implement a Circuit selector based on VerifierKeys for The Host
area: genesis-contracts area:architecture area:cryptography team:Core type:refactor type:tech-debt
On the current design that is being done in #51 there is no support for handling `VerifierKey`s as arguments and matching over them to use the appropriate `Circuit` struct to verify the `Proof`s in an easy way. The goal is to match inside the `verify_proof` host function implemented inside `RuskExternals` over the hash of the `VerifierKey` (`H(VerifierKey)`), which is also the name of the file where the circuit-related data is stored. We should also bear in mind that we will need to build some kind of map/connection between circuit structures and their hash namefiles in order to be able to match from the hash of a `VerifierKey` into the Circuit that we want to use.
1
Implement a Circuit selector based on VerifierKeys for The Host - On the current design that is being done in #51 there is no support for handling `VerifierKey`s as arguments and matching over them to use the appropriate `Circuit` struct to verify the `Proof`s in an easy way. The goal is to match inside the `verify_proof` host function implemented inside `RuskExternals` over the hash of the `VerifierKey` (`H(VerifierKey)`), which is also the name of the file where the circuit-related data is stored. We should also bear in mind that we will need to build some kind of map/connection between circuit structures and their hash namefiles in order to be able to match from the hash of a `VerifierKey` into the Circuit that we want to use.
architecture
implement a circuit selector based on verifierkeys for the host on the current design that is being done in there is no support for handling verifierkey s as arguments and matching over them to use the appropriate circuit struct to verify the proof s in an easy way the goal is to match inside the verify proof host function implemented inside ruskexternals over the hash of the verifierkey h verifierkey which is also the name of the file where the circuit related data is stored we should also bear in mind that we will need to build some kind of map connection between circuit structures and their hash namefiles in order to be able to match from the hash of a verifierkey into the circuit that we want to use
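The selector idea in sketch form, in Go rather than the Rust host code; `Circuit` is a hypothetical interface, and sha256 merely stands in for whatever hash Rusk actually uses for `H(VerifierKey)`:

```go
package host

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
)

type Circuit interface {
	Verify(proof []byte) bool
}

// registry maps hex(H(VerifierKey)), which is also the on-disk
// filename, to the circuit that should verify proofs for that key.
var registry = map[string]Circuit{}

func keyHash(vk []byte) string {
	sum := sha256.Sum256(vk)
	return hex.EncodeToString(sum[:])
}

func Register(vk []byte, c Circuit) { registry[keyHash(vk)] = c }

// Select matches the hash of a VerifierKey to its Circuit.
func Select(vk []byte) (Circuit, error) {
	c, ok := registry[keyHash(vk)]
	if !ok {
		return nil, errors.New("no circuit registered for this verifier key")
	}
	return c, nil
}
```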
33
15,708,778,763
IssuesEvent
2021-03-26 21:08:43
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
closed
Networking: Invalid Token Handler
Architecture Tech Debt type: enhancement type: task
Update the Networking layer so that any Authentication Error is properly parsed and relayed through a common channel.
1
Networking: Invalid Token Handler - Update the Networking layer so that any Authentication Error is properly parsed and relayed through a common channel.
architecture
networking invalid token handler update the networking layer so that any authentication error is properly parsed and relayed through a common channel
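A hedged Go sketch of the pattern (the real layer is Swift, and the names here are invented): authentication failures are detected at one choke point and relayed through a shared channel that interested observers consume.

```go
package network

import "net/http"

// AuthFailures is the common channel every request path reports into.
var AuthFailures = make(chan string, 16)

// check inspects each response once, so no call site needs its own
// invalid-token handling.
func check(resp *http.Response, endpoint string) {
	if resp.StatusCode == http.StatusUnauthorized {
		select {
		case AuthFailures <- endpoint: // relay through the common channel
		default: // drop if nobody is listening, so requests never block
		}
	}
}
```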
34
16,175,902,189
IssuesEvent
2021-05-03 06:42:23
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
closed
Allow Testing of Localized Strings
Architecture Tech Debt type: task
There are times when you would like to test a localized string. For example, you may want to make sure that the quantity and price are the values attached to the final string of this: ```swift enum Localization { static func subtitle(quantity: String, price: String) -> String { let format = NSLocalizedString("%1$@ x %2$@", comment: "") return String.localizedStringWithFormat(format, quantity, price) } } ``` We can technically do this now. But if the tests are running on a non-English simulator, then the test would fail because the `NSLocalizedString()` call would return the localized value. ## Possible Solutions We'd probably have to create our own localization functions. These functions can probably allow overrides so that the tests can _expect_ a specific language to be used. We'd probably also need to modify `localize.py` so it will not just look for `NSLocalizedString()` calls. ### Stretch Goal It'd probably be better if we can also organize our localized strings into a single file. Kind of like how [SwiftGen does it](https://github.com/SwiftGen/SwiftGen#strings).
1
Allow Testing of Localized Strings - There are times when you would like to test a localized string. For example, you may want to make sure that the quantity and price are the values attached to the final string of this: ```swift enum Localization { static func subtitle(quantity: String, price: String) -> String { let format = NSLocalizedString("%1$@ x %2$@", comment: "") return String.localizedStringWithFormat(format, quantity, price) } } ``` We can technically do this now. But if the tests are running on a non-English simulator, then the test would fail because the `NSLocalizedString()` call would return the localized value. ## Possible Solutions We'd probably have to create our own localization functions. These functions can probably allow overrides so that the tests can _expect_ a specific language to be used. We'd probably also need to modify `localize.py` so it will not just look for `NSLocalizedString()` calls. ### Stretch Goal It'd probably be better if we can also organize our localized strings into a single file. Kind of like how [SwiftGen does it](https://github.com/SwiftGen/SwiftGen#strings).
architecture
allow testing of localized strings there are times when you would like to test a localized string for example you may want to make sure that the quantity and price are the values attached to the final string of this swift enum localization static func subtitle quantity string price string string let format nslocalizedstring x comment return string localizedstringwithformat format quantity price we can technically do this now but if the tests are running on a non english simulator then the test would fail because the nslocalizedstring call would return the localized value possible solutions we d probably have to create our own localization functions these functions can probably allow overrides so that the tests can expect a specific language to be used we d probably also need to modify localize py so it will not just look for nslocalizedstring calls stretch goal it d probably be better if we can also organize our localized strings into a single file kind of like how
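As a language-neutral companion to the Swift snippet above (a Go sketch with invented names), the usual fix is to route lookups through an overridable translation hook so tests can pin the language regardless of the simulator or OS locale:

```go
package l10n

import "fmt"

// translate is a variable, not a function, so tests can override it
// and assert against a known language regardless of the OS locale.
var translate = func(key string) string {
	return key // production would consult the real string catalog here
}

// Subtitle formats quantity and price with a localized template.
func Subtitle(quantity, price string) string {
	return fmt.Sprintf(translate("%s x %s"), quantity, price)
}
```

A test would assign `translate = func(string) string { return "%s x %s" }` and then assert on the exact output, which sidesteps the non-English-simulator failure described above.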
35
17,116,841,368
IssuesEvent
2021-07-11 14:32:30
spacemeshos/go-spacemesh
https://api.github.com/repos/spacemeshos/go-spacemesh
opened
Replace dynamic XDR encoding/ decoding with typed encoding using .x files
After MN architecture technical debt
## Description Currently we use dynamic encoding/decoding of our structs. We should move to using statically defined .x files that describe the structs' primitive types, for better performance.
1
Replace dynamic XDR encoding/ decoding with typed encoding using .x files - ## Description Currently we use dynamic encoding/decoding of our structs. We should move to using statically defined .x files that describe the structs' primitive types, for better performance.
architecture
replace dynamic xdr encoding decoding with typed encoding using x files description currently we use dynamic encoding decoding of our structs we should move to using statically defined x files that describe the structs primitive types for better performance
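A hedged illustration of the dynamic-vs-static contrast in Go, with `encoding/binary` standing in for the code an .x file would generate (`Block` and its fields are invented): the layout is fixed at compile time, so encoding involves no runtime reflection.

```go
package codec

import "encoding/binary"

// Block is encoded with a statically known layout, the way code
// generated from an .x file would be, with no runtime reflection.
type Block struct {
	Layer uint64
	Nonce uint32
}

// MarshalBinary writes the fields at fixed, big-endian offsets.
func (b *Block) MarshalBinary() []byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint64(buf[0:8], b.Layer)
	binary.BigEndian.PutUint32(buf[8:12], b.Nonce)
	return buf
}

// UnmarshalBinary reads the same fixed layout back.
func (b *Block) UnmarshalBinary(buf []byte) {
	b.Layer = binary.BigEndian.Uint64(buf[0:8])
	b.Nonce = binary.BigEndian.Uint32(buf[8:12])
}
```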
36
17,261,365,839
IssuesEvent
2021-07-22 08:07:52
dusk-network/rusk-vm
https://api.github.com/repos/dusk-network/rusk-vm
opened
Reduce size of contract arguments
area:architecture team:Core type:enhancement type:tech-debt
At the moment the struct size of `Call` is _1312_, and the encoded size is _2526_. As encoded sizes should in general be smaller than the in-memory representation, we need to understand why it is the opposite here. One potential optimization has to do with avoiding one single `enum` for all possible call arguments. This is a waste of space, since each instance of the enum will necessarily occupy the space of the largest variant.
1
Reduce size of contract arguments - At the moment the struct size of `Call` is _1312_, and the encoded size is _2526_. As encoded sizes should in general be smaller than the in-memory representation, we need to understand why it is the opposite here. One potential optimization has to do with avoiding one single `enum` for all possible call arguments. This is a waste of space, since each instance of the enum will necessarily occupy the space of the largest variant.
architecture
reduce size of contract arguments at the moment the struct size of call is and the encoded size is as encoded sizes should in general be smaller than the in memory representation we need to understand why it is the opposite here one potential optimization has to do with avoiding one single enum for all possible call arguments this is a waste of space since each instance of the enum will necessarily occupy the space of the largest variant
37
17,261,389,259
IssuesEvent
2021-07-22 08:09:44
dusk-network/rusk-vm
https://api.github.com/repos/dusk-network/rusk-vm
opened
Reduce levels of function nesting
area:architecture team:Core type:tech-debt
At the moment, the Transfer Contract contains the `Tree` structure, which contains the `PoseidonTree`, which in turn contains the `NStack`. One hypothesis is that this could lead to nested function calls that could be avoided. This issue is worthy of deeper research
1
Reduce levels of function nesting - At the moment, the Transfer Contract contains the `Tree` structure, which contains the `PoseidonTree`, which in turn contains the `NStack`. One hypothesis is that this could lead to nested function calls that could be avoided. This issue is worthy of deeper research
architecture
reduce levels of function nesting at the moment the transfer contract contains the tree structure which contains the poseidontree which in turn contains the nstack one hypothesis is that this could lead to nested function calls that could be avoided this issue is worthy of deeper research
39
18,164,188,539
IssuesEvent
2021-09-27 13:04:58
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
reopened
Refactor frontend stores for table-data
type: enhancement affects: architecture affects: technical debt work: frontend status: ready restricted: maintainers
## Purpose of this refactor: 1. Our tableData store handles columns, records, and meta information needed for parameters and display purposes. It also includes the type definitions for all of them. This makes it cluttered, and there is an overlap of different concerns. A better approach would be to split it into separate stores, and have a parent store which maintains the rest. 2. Currently, the logic for certain requirements, such as passing parameters to the table record request and refetching table data when params change, is implemented in the component. This can be side-stepped and done entirely by observing store values. This gives a cleaner view/data model. 3. We need to simplify certain display-specific stores, such as the column position map, to improve readability. 4. General improvements: separate out cells from header and row as different components. 5. Update svelte, vite and related packages.
1
Refactor frontend stores for table-data - ## Purpose of this refactor: 1. Our tableData store handles columns, records, and meta information needed for parameters and display purposes. It also includes the type definitions for all of them. This makes it cluttered, and there is an overlap of different concerns. A better approach would be to split it into separate stores, and have a parent store which maintains the rest. 2. Currently, the logic for certain requirements, such as passing parameters to the table record request and refetching table data when params change, is implemented in the component. This can be side-stepped and done entirely by observing store values. This gives a cleaner view/data model. 3. We need to simplify certain display-specific stores, such as the column position map, to improve readability. 4. General improvements: separate out cells from header and row as different components. 5. Update svelte, vite and related packages.
architecture
refactor frontend stores for table data purpose of this refactor our tabledata store handles columns records and meta information needed for parameters and display purposes it also includes the type definitions for all of them this makes it cluttered and there is an overlap of different concerns a better approach would be to split it into separate stores and have a parent store which maintains the rest currently logic of certain requirements such as passing parameters to table record request refetching table data when params change etc are done on the component this can be side stepped and done entirely by observing store values this gives a cleaner view data model we need to simplify certain display specific stores such as column position map to improve readability general improvements separate out cells from header and row as different components update svelte vite and related packages
42
19,100,061,270
IssuesEvent
2021-11-29 21:18:53
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Redirect Request - for VAMC lgbt web pages to new VAMC lgbtq+ web pages.
ia tech-debt VAMC-Upgrade platform-architecture-working-group
### Story As a Veteran I need to be redirected from the old LGBT web page to the new LGBTQ+ web page, so I can be sure I have the latest information and health services. ### Type of request - [ ] We are retiring or taking down a page and need to redirect the URL (complete redirect section) - [X] We are changing the URL of an existing page (complete redirect section) - [ ] We need a custom vanity URL (complete vanity URL section) ### Implementation date Immediately. These pages were found to have incorrect URLs on the updated webpages. The new web pages will take effect ASAP; to mitigate any broken links for Veterans, please process redirects as soon as possible. Thank you. ### Redirects Current URL | Redirect Destination or New URL https://www.va.gov/miami-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/miami-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/houston-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/houston-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/montana-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/montana-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/nebraska-western-iowa-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/nebraska-western-iowa-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/new-jersey-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/new-jersey-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/north-florida-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/north-florida-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/northern-california-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/northern-california-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/oklahoma-city-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/oklahoma-city-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/orlando-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/orlando-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/shreveport-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/shreveport-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/pacific-islands-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/pacific-islands-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/palo-alto-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/palo-alto-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/pittsburgh-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/pittsburgh-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/salem-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/salem-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/salisbury-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/salisbury-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/salt-lake-city-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/salt-lake-city-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/san-francisco-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/san-francisco-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/sioux-falls-health-care/health-services/lgbt-veteran-care/ | 
https://www.va.gov/sioux-falls-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/southeast-louisiana-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/southeast-louisiana-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/southern-nevada-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/southern-nevada-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/syracuse-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/syracuse-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/tuscaloosa-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/tuscaloosa-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/fayetteville-arkansas-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/fayetteville-arkansas-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/west-palm-beach-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/west-palm-beach-health-care/health-services/lgbtq-veteran-care/ ### Vanity URLs N/A **Link to campaign landing page request issue:** ### Process, Roles and Responsibilities - [x] Requesting team: Above information is provided - [x] Requesting team: All appropriate VA stakeholders are notified as appropriate - [ ] IA: Request is vetted and documented and implementation plan is clear - [ ] IA: Request is assigned to appropriate team for implementation - [ ] Implementation team: Work is complete - [ ] Implementation team: Validated in production - [ ] Requesting team: Validates everything is correct in production and closes ticket
1
Redirect Request - for VAMC lgbt web pages to new VAMC lgbtq+ web pages. - ### Story As a Veteran I need to be redirected from the old LGBT web page to the new LGBTQ+ web page, so I can be sure I have the latest information and health services. ### Type of request - [ ] We are retiring or taking down a page and need to redirect the URL (complete redirect section) - [X] We are changing the URL of an existing page (complete redirect section) - [ ] We need a custom vanity URL (complete vanity URL section) ### Implementation date Immediately. These pages were found to have incorrect URLs on the updated webpages. The new web pages will take effect ASAP; to mitigate any broken links for Veterans, please process redirects as soon as possible. Thank you. ### Redirects Current URL | Redirect Destination or New URL https://www.va.gov/miami-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/miami-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/houston-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/houston-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/montana-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/montana-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/nebraska-western-iowa-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/nebraska-western-iowa-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/new-jersey-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/new-jersey-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/north-florida-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/north-florida-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/northern-california-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/northern-california-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/oklahoma-city-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/oklahoma-city-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/orlando-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/orlando-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/shreveport-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/shreveport-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/pacific-islands-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/pacific-islands-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/palo-alto-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/palo-alto-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/pittsburgh-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/pittsburgh-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/salem-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/salem-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/salisbury-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/salisbury-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/salt-lake-city-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/salt-lake-city-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/san-francisco-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/san-francisco-health-care/health-services/lgbtq-veteran-care/ 
https://www.va.gov/sioux-falls-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/sioux-falls-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/southeast-louisiana-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/southeast-louisiana-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/southern-nevada-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/southern-nevada-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/syracuse-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/syracuse-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/tuscaloosa-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/tuscaloosa-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/fayetteville-arkansas-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/fayetteville-arkansas-health-care/health-services/lgbtq-veteran-care/ https://www.va.gov/west-palm-beach-health-care/health-services/lgbt-veteran-care/ | https://www.va.gov/west-palm-beach-health-care/health-services/lgbtq-veteran-care/ ### Vanity URLs N/A **Link to campaign landing page request issue:** ### Process, Roles and Responsibilities - [x] Requesting team: Above information is provided - [x] Requesting team: All appropriate VA stakeholders are notified as appropriate - [ ] IA: Request is vetted and documented and implementation plan is clear - [ ] IA: Request is assigned to appropriate team for implementation - [ ] Implementation team: Work is complete - [ ] Implementation team: Validated in production - [ ] Requesting team: Validates everything is correct in production and closes ticket
architecture
redirect request for vamc lgbt web pages to new vamc lgbtq web pages story as a veteran i need to be redirected from old lgbt web page to new lgbtq web page so i m assured i have the latest information and health services type of request we are retiring or taking down a page and need to redirect the url complete redirect section we are changing the url of an existing page complete redirect section we need a custom vanity url complete vanity url section implementation date immediately these pages were found to have incorrect urls on the updated webpages the new web pages will take effect asap to mitigate any broken links for veterans please process redirects as soon as possible thank you redirects current url redirect destination or new url vanity urls n a link to campaign landing page request issue process roles and responsibilities requesting team above information is provided requesting team all appropriate va stakeholders are notified as appropriate ia request is vetted and documented and implementation plan is clear ia request is assigned to appropriate team for implementation implementation team work is complete implementation team validated in production requesting team validates everything is correct in production and closes ticket
43
19,287,426,961
IssuesEvent
2021-12-11 07:05:14
spacemeshos/go-spacemesh
https://api.github.com/repos/spacemeshos/go-spacemesh
closed
Async/on-demand tortoise
Tortoise Protocol technical debt architecture Before MN
Right now, tortoise receives data in several ways: - when Hare finishes processing a layer, its output is sent to `tortoise.HandleIncomingLayer` via `mesh.ValidateLayer` - when a layer is received via sync, the same thing happens - when a late block arrives via sync or gossip, it's sent individually into `tortoise.HandleLateBlocks` All of these are currently handled synchronously: e.g., the syncer is waiting for tortoise to finish running and processing the new layer before it continues syncing. The tortoise should be totally asynchronous. It should be a separate, autonomous "background process" in its own goroutine (like the hare broker). All incoming data should be buffered on channels, and the caller should never wait for it to finish. This would have a few advantages: - easier to process incoming blocks and layers in batches (especially useful for late blocks, see #2412). This is the main motivation for this change. The way in which tortoise processes data should not be tightly coupled to the messages it receives about new blocks and layers. - architecturally, it makes more sense: there's no reason hare, mesh, or syncer should block on tortoise. It makes the API simpler. - makes it a bit simpler to rerun the verifying tortoise from scratch periodically, on its own schedule (without worrying about blocking data providers), or to trigger tortoise once in a while when an accounting of the voting weight of incoming blocks requires it No block data should be passed directly into tortoise (this was done already in #2400). Tortoise should just receive notifications that new data is waiting to be processed. The API should basically just be: - new incoming layer - late block received
1
Async/on-demand tortoise - Right now, tortoise receives data in several ways: - when Hare finishes processing a layer, its output is sent to `tortoise.HandleIncomingLayer` via `mesh.ValidateLayer` - when a layer is received via sync, the same thing happens - when a late block arrives via sync or gossip, it's sent individually into `tortoise.HandleLateBlocks` All of these are currently handled synchronously: e.g., the syncer is waiting for tortoise to finish running and processing the new layer before it continues syncing. The tortoise should be totally asynchronous. It should be a separate, autonomous "background process" in its own goroutine (like the hare broker). All incoming data should be buffered on channels, and the caller should never wait for it to finish. This would have a few advantages: - easier to process incoming blocks and layers in batches (especially useful for late blocks, see #2412). This is the main motivation for this change. The way in which tortoise processes data should not be tightly coupled to the messages it receives about new blocks and layers. - architecturally, it makes more sense: there's no reason hare, mesh, or syncer should block on tortoise. It makes the API simpler. - makes it a bit simpler to rerun the verifying tortoise from scratch periodically, on its own schedule (without worrying about blocking data providers), or to trigger tortoise once in a while when an accounting of the voting weight of incoming blocks requires it No block data should be passed directly into tortoise (this was done already in #2400). Tortoise should just receive notifications that new data is waiting to be processed. The API should basically just be: - new incoming layer - late block received
architecture
async on demand tortoise right now tortoise receives data in several ways when hare finishes processing a layer its output is sent to tortoise handleincominglayer via mesh validatelayer when a layer is received via sync the same thing happens when a late block arrives via sync or gossip it s sent individually into tortoise handlelateblocks all of these are currently handled synchronously e g the syncer is waiting for tortoise to finish running and processing the new layer before it continues syncing the tortoise should be totally asynchronous it should be a separate autonomous background process in its own goroutine like the hare broker all incoming data should be buffered on channels and the caller should never wait for it to finish this would have a few advantages easier to process incoming blocks and layers in batches especially useful for late blocks see this is the main motivation for this change the way in which tortoise processes data should not be tightly coupled to the messages it receives about new blocks and layers architecturally it makes more sense there s no reason hare mesh or syncer should block on tortoise it makes the api simpler makes it a bit simpler to rerun the verifying tortoise from scratch periodically on its own schedule without worrying about blocking data providers or to trigger tortoise once in a while when an accounting of the voting weight of incoming blocks requires it no block data should be passed directly into tortoise this was done already in tortoise should just receive notifications that new data is waiting to be processed the api should basically just be new incoming layer late block received
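The tortoise issue above amounts to a classic buffered producer/consumer split: producers enqueue notifications and never wait, while an autonomous loop drains and batches them. A minimal sketch of that pattern in Python asyncio (go-spacemesh itself is Go; every name below is illustrative, not the project's API):

```python
import asyncio
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LayerEvent:
    """Notification only -- no block data travels with the event."""
    layer_id: int
    late_block: bool = False

class TortoiseLoop:
    """Autonomous background consumer; callers enqueue and never wait."""

    def __init__(self, maxsize: int = 1024) -> None:
        self._events: "asyncio.Queue[LayerEvent]" = asyncio.Queue(maxsize)
        self._task: Optional[asyncio.Task] = None

    def start(self) -> None:
        # Runs the loop as an independent background task.
        self._task = asyncio.create_task(self._run())

    def notify(self, event: LayerEvent) -> None:
        # Hare, sync, and gossip analogues call this and return immediately.
        self._events.put_nowait(event)

    async def _run(self) -> None:
        while True:
            batch: List[LayerEvent] = [await self._events.get()]
            while not self._events.empty():  # drain so late blocks are batched
                batch.append(self._events.get_nowait())
            self._process(batch)

    def _process(self, batch: List[LayerEvent]) -> None:
        ...  # placeholder: run one tortoise pass over the whole batch
```

Separating the queue from the processing is what makes batching (and periodic self-scheduled reruns) possible: the loop decides when and how much to consume, not the producers.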
44
19,292,339,503
IssuesEvent
2021-12-12 01:40:39
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Define common error structure
type: bug affects: architecture affects: technical debt work: backend status: review
## Description Currently, there is no common error structure on the backend. * Most requests with bad input return 500, while they should return 400. * The current 400 errors do not follow a common structure. In some observed cases: * they return a JSON array of strings * they don't return anything * they return a JSON object with a property named `detail` ## Expected behaviour * There needs to be a well defined error structure that is common for all requests. * Errors with bad input should only return status code 400.
1
Define common error structure - ## Description Currently, there is no common error structure on the backend. * Most requests with bad input return 500, while they should return 400. * The current 400 errors do not follow a common structure. In some observed cases: * they return a JSON array of strings * they don't return anything * they return a JSON object with a property named `detail` ## Expected behaviour * There needs to be a well defined error structure that is common for all requests. * Errors with bad input should only return status code 400.
architecture
define common error structure description currently there is no common error structure on the backend most requests with bad input return while they should return the current errors do not follow a common structure in some observed cases they return a json array of strings they don t return anything they return a json object with a property named detail expected behaviour there needs to be a well defined error structure that is common for all requests errors with bad input should only return status code
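For a Django REST Framework backend like the one in the issue above, the usual mechanism for enforcing a single error shape is a custom exception handler registered in settings. A sketch follows; the envelope fields and the `myapp.errors` module path are assumptions for illustration, not Mathesar's eventual format:

```python
# settings.py (assumed):
# REST_FRAMEWORK = {"EXCEPTION_HANDLER": "myapp.errors.common_exception_handler"}
from rest_framework.views import exception_handler

def common_exception_handler(exc, context):
    """Rewrap every handled DRF error into one common envelope."""
    response = exception_handler(exc, context)  # DRF's default handling first
    if response is None:
        # Non-API exceptions still surface as 500s; bad-input errors reach
        # this handler as 400s via DRF's ValidationError machinery.
        return None
    original = response.data
    response.data = {
        "errors": [{
            "code": getattr(exc, "default_code", "error"),
            "message": str(exc),
            "details": original,
        }]
    }
    return response
```

Raising `rest_framework.exceptions.ValidationError` for bad input (instead of letting exceptions bubble up) is what turns the observed 500s into well-formed 400s.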
45
20,367,093,769
IssuesEvent
2022-02-21 07:23:25
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Assess database type infrastructure
type: enhancement affects: dx affects: architecture affects: technical debt work: backend work: database status: draft type: meta
## Problem <!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.--> As we near the end of the initial round of implementing database types, we should reassess our current setup and look for improvements. Especially since some GSoC projects involve adding types or adding features to types. ## Proposed solution <!-- A clear and concise description of your proposed solution or feature. --> We should make it as easy and obvious as possible to - Add a new custom type - Add support for native PostgreSQL types - Add support for common 3rd-party types (e.g., PostGIS) - add support for `type_options` for a type. ## Additional context <!-- Add any other context or screenshots about the feature request here.--> This is a draft while we figure out what improvements might actually be possible.
1
Assess database type infrastructure - ## Problem <!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.--> As we near the end of the initial round of implementing database types, we should reassess our current setup and look for improvements. Especially since some GSoC projects involve adding types or adding features to types. ## Proposed solution <!-- A clear and concise description of your proposed solution or feature. --> We should make it as easy and obvious as possible to - Add a new custom type - Add support for native PostgreSQL types - Add support for common 3rd-party types (e.g., PostGIS) - add support for `type_options` for a type. ## Additional context <!-- Add any other context or screenshots about the feature request here.--> This is a draft while we figure out what improvements might actually be possible.
architecture
assess database type infrastructure problem as we near the end of the initial round of implementing database types we should reassess our current setup and look for improvements especially since some gsoc projects involve adding types or adding features to types proposed solution we should make it as easy and obvious as possible to add a new custom type add support for native postgresql types add support for common party types e g postgis add support for type options for a type additional context this is a draft while we figure out what improvements might actually be possible
46
20,753,135,542
IssuesEvent
2022-03-15 09:39:51
woocommerce/woocommerce-android
https://api.github.com/repos/woocommerce/woocommerce-android
closed
Remove unused Order list FluxC code
type: task feature: order list category: architecture type: technical debt
Once #1559 is in production for a bit and deemed stable, remove the old unused order list code in FluxC.
1
Remove unused Order list FluxC code - Once #1559 is in production for a bit and deemed stable, remove the old unused order list code in FluxC.
architecture
remove unused order list fluxc code once is in production for a bit and deemed stable remove the old unused order list code in fluxc
47
20,991,626,000
IssuesEvent
2022-03-29 09:48:28
woocommerce/woocommerce-android
https://api.github.com/repos/woocommerce/woocommerce-android
opened
Refactor SitePicker screens to MVVM
category: architecture type: technical debt
This master issue lists the changes we need to migrate the site picker screen to use MVVM. ## FluxC - Migrate fetch supported woo version methods into a suspendable function. - Migrate fetch site settings methods into a suspendable function. - Migrate fetch site product settings methods into a suspendable function. ## Woo - Update existing methods in Woo that fetch site settings & product settings and remove usages of [deprecated event bus architecture](https://github.com/wordpress-mobile/WordPress-FluxC-Android/wiki/%5BDeprecated%5D-Architecture). - Since the `SIGN OUT` action is the only action in the site picker that is utilising event bus, I thought of moving it to a separate class for now. It looks like the `AccountStore` is completely in Java and migrating the sign out action to suspendable is out of scope for this task. - Create a `SitePickerRepository` to communicate with FluxC to fetch site related details from the API. - Create a `ViewModel` that handles business logic for the Site picker screens. - Update the fragment classes and remove legacy code.
1
Refactor SitePicker screens to MVVM - This master issue lists the changes we need to migrate the site picker screen to use MVVM. ## FluxC - Migrate fetch supported woo version methods into a suspendable function. - Migrate fetch site settings methods into a suspendable function. - Migrate fetch site product settings methods into a suspendable function. ## Woo - Update existing methods in Woo that fetch site settings & product settings and remove usages of [deprecated event bus architecture](https://github.com/wordpress-mobile/WordPress-FluxC-Android/wiki/%5BDeprecated%5D-Architecture). - Since the `SIGN OUT` action is the only action in the site picker that is utilising event bus, I thought of moving it to a separate class for now. It looks like the `AccountStore` is completely in Java and migrating the sign out action to suspendable is out of scope for this task. - Create a `SitePickerRepository` to communicate with FluxC to fetch site related details from the API. - Create a `ViewModel` that handles business logic for the Site picker screens. - Update the fragment classes and remove legacy code.
architecture
refactor sitepicker screens to mvvm this master issue lists the changes we need to migrate the site picker screen to use mvvm fluxc migrate fetch supported woo version methods into a suspendable function migrate fetch site settings methods into a suspendable function migrate fetch site product settings methods into a suspendable function woo update existing methods in woo that fetch site settings product settings and remove usages of since the sign out action is the only action in the site picker that is utilising event bus i thought of moving it to a separate class for now it looks like the accountstore is completely in java and migrating the sign out action to suspendable is out of scope for this task create a sitepickerrepository to communicate with fluxc to fetch site related details from the api create a viewmodel that handles business logic for the site picker screens update the fragment classes and remove legacy code
48
22,267,621,896
IssuesEvent
2022-06-10 09:02:14
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
closed
Direct access to App Settings
category: architecture type: technical debt
Currently, every time we want to read/write from app settings we have to dispatch an `AppSettingsAction` and go through `AppSettingsStore`. The Flux-like model we have in Yosemite has worked well for dealing with data that is stored in Core Data, but the abstraction is making other data access harder. In the case of app settings, having to go through Yosemite means that, for each new setting, we have to: 1. Create a setter action to save a new value 2. Create a getter action with a completion block because actions can't return data 3. Because of that ☝🏽, reading an app setting becomes an async operation, when the reality is that we load settings synchronously. This becomes especially apparent when we are dealing with Experimental Features, which we store as app settings. For instance, [a recent PR](https://github.com/woocommerce/woocommerce-ios/pull/6954) to remove an experimental feature flag ended up removing 265 lines of code. As another example, when we started supporting IPP in Canada, both the WCPay support for Canada, and Stripe support for the US were behind a feature flag. This meant that our onboarding code had to take both flags into account (among other things), but couldn't read their values synchronously and had to get the stores manager injected, [making the code more complicated than necessary](https://github.com/woocommerce/woocommerce-ios/blob/7f592ed183523205dba8ffdbc683bff0770643ee/WooCommerce/Classes/ViewRelated/Dashboard/Settings/In-Person%20Payments/CardPresentPaymentsOnboardingUseCase.swift#L42). I think it's time to get app settings out of the stores layer and directly accessible via `ServiceLocator`.
1
Direct access to App Settings - Currently, every time we want to read/write from app settings we have to dispatch an `AppSettingsAction` and go through `AppSettingsStore`. The Flux-like model we have in Yosemite has worked well for dealing with data that is stored in Core Data, but the abstraction is making other data access harder. In the case of app settings, having to go through Yosemite means that, for each new setting, we have to: 1. Create a setter action to save a new value 2. Create a getter action with a completion block because actions can't return data 3. Because of that ☝🏽, reading an app setting becomes an async operation, when the reality is that we load settings synchronously. This becomes especially apparent when we are dealing with Experimental Features, which we store as app settings. For instance, [a recent PR](https://github.com/woocommerce/woocommerce-ios/pull/6954) to remove an experimental feature flag ended up removing 265 lines of code. As another example, when we started supporting IPP in Canada, both the WCPay support for Canada, and Stripe support for the US were behind a feature flag. This meant that our onboarding code had to take both flags into account (among other things), but couldn't read their values synchronously and had to get the stores manager injected, [making the code more complicated than necessary](https://github.com/woocommerce/woocommerce-ios/blob/7f592ed183523205dba8ffdbc683bff0770643ee/WooCommerce/Classes/ViewRelated/Dashboard/Settings/In-Person%20Payments/CardPresentPaymentsOnboardingUseCase.swift#L42). I think it's time to get app settings out of the stores layer and directly accessible via `ServiceLocator`.
architecture
direct access to app settings currently every time we want to read write from app settings we have to dispatch an appsettingsaction and go through appsettingsstore the flux like model we have in yosemite has worked well for dealing with data that is stored in core data but the abstraction is making other data access harder in the case of app settings having to go through yosemite means that for each new setting we have to create a setter action to save a new value create a getter action with a completion block because actions can t return data because of that ☝🏽 reading an app setting becomes an async operation when the reality is that we load settings synchronously this becomes especially apparent when we are dealing with experimental features which we store as app settings for instance to remove an experimental feature flag ended up removing lines of code as another example when we started supporting ipp in canada both the wcpay support for canada and stripe support for the us were behind a feature flag this meant that our onboarding code had to take both flags into account among other things but couldn t read their values synchronously and had to get the stores manager injected i think it s time to get app settings out of the stores layer and directly accessible via servicelocator
49
23,189,582,559
IssuesEvent
2022-08-01 11:26:35
woocommerce/woocommerce-android
https://api.github.com/repos/woocommerce/woocommerce-android
closed
IOException: PHONE_REGISTRATION_ERROR
category: architecture type: technical debt
Sentry Issue: [WOOCOMMERCE-ANDROID-23R](https://sentry.io/organizations/a8c/issues/2574418765/?referrer=github_integration). First seen in 7.3-rc-2 on August 13, log shows "Fetching FCM registration token failed" and "FIS_AUTH_ERROR." ``` IOException: PHONE_REGISTRATION_ERROR at com.google.firebase.iid.GmsRpc.handleResponse(com.google.firebase:firebase-iid@@21.0.0:84) at com.google.firebase.iid.GmsRpc.lambda$extractResponseWhenComplete$0$GmsRpc(com.google.firebase:firebase-iid@@21.0.0:94) at com.google.firebase.iid.GmsRpc$$Lambda$0.then at com.google.android.gms.tasks.zzd.run at com.google.firebase.iid.FirebaseIidExecutors$$Lambda$0.execute ... (19 additional frame(s) were not displayed) Fetching FCM registration token failed ```
1
IOException: PHONE_REGISTRATION_ERROR - Sentry Issue: [WOOCOMMERCE-ANDROID-23R](https://sentry.io/organizations/a8c/issues/2574418765/?referrer=github_integration). First seen in 7.3-rc-2 on August 13, log shows "Fetching FCM registration token failed" and "FIS_AUTH_ERROR." ``` IOException: PHONE_REGISTRATION_ERROR at com.google.firebase.iid.GmsRpc.handleResponse(com.google.firebase:firebase-iid@@21.0.0:84) at com.google.firebase.iid.GmsRpc.lambda$extractResponseWhenComplete$0$GmsRpc(com.google.firebase:firebase-iid@@21.0.0:94) at com.google.firebase.iid.GmsRpc$$Lambda$0.then at com.google.android.gms.tasks.zzd.run at com.google.firebase.iid.FirebaseIidExecutors$$Lambda$0.execute ... (19 additional frame(s) were not displayed) Fetching FCM registration token failed ```
architecture
ioexception phone registration error sentry issue first seen in rc on august log shows fetching fcm registration token failed and fis auth error ioexception phone registration error at com google firebase iid gmsrpc handleresponse com google firebase firebase iid at com google firebase iid gmsrpc lambda extractresponsewhencomplete gmsrpc com google firebase firebase iid at com google firebase iid gmsrpc lambda then at com google android gms tasks zzd run at com google firebase iid firebaseiidexecutors lambda execute additional frame s were not displayed fetching fcm registration token failed
50
23,676,730,498
IssuesEvent
2022-08-28 07:33:10
eurofurence/ef-app_ios
https://api.github.com/repos/eurofurence/ef-app_ios
opened
Migration to SwiftUI
enhancement rearchitecture technical debt
Aim to improve the overall architecture of the app (with respect to modern Cocoa development) with the side benefit of making it easier to understand through simplification of layers. Rather than do a full rewrite (and risk losing time for next year/introducing a slew of bugs) we should do this in stages: - Port existing views to use SwiftUI, each view consuming a view model dependency. Implementations of the dependency bridge into the existing model. Leave the existing routing tier as it is, with component factories returning UIHostingController objects wrapping the SwiftUI views. - Drop abstraction layers in model around objects and expose new NSManagedObject subclasses for entities. The model package will continue to own how we create, update and fetch them. Gradually move more behaviour into these objects (as with the current model refactor). - When our minimum OS dependency hits iOS 15, directly read objects into views using SectionFetchRequest and drop the view models. All the satellite app services that use the model but do not have a visual presence in the app - e.g. notification scheduling - can continue to be driven with tests using the model. These consumers of the model can act as a sanity check for the APIs as we move things around.
1
Migration to SwiftUI - Aim to improve the overall architecture of the app (with respect to modern Cocoa development) with the side benefit of making it easier to understand through simplification of layers. Rather than do a full rewrite (and risk losing time for next year/introducing a slew of bugs) we should do this in stages: - Port existing views to use SwiftUI, each view consuming a view model dependency. Implementations of the dependency bridge into the existing model. Leave the existing routing tier as it is, with component factories returning UIHostingController objects wrapping the SwiftUI views. - Drop abstraction layers in model around objects and expose new NSManagedObject subclasses for entities. The model package will continue to own how we create, update and fetch them. Gradually move more behaviour into these objects (as with the current model refactor). - When our minimum OS dependency hits iOS 15, directly read objects into views using SectionFetchRequest and drop the view models. All the satellite app services that use the model but do not have a visual presence in the app - e.g. notification scheduling - can continue to be driven with tests using the model. These consumers of the model can act as a sanity check for the APIs as we move things around.
architecture
migration to swiftui aim to improve the overall architecture of the app with respect to modern cocoa development with the side benefit of making it easier to understand through simplification of layers rather than do a full rewrite and risk losing time for next year introducing a slew of bugs we should do this in stages port existing views to use swiftui each view consuming a view model dependency implementations of the dependency bridge into the existing model leave the existing routing tier as it is with component factories returning uihostingcontroller objects wrapping the swiftui views drop abstraction layers in model around objects and expose new nsmanagedobject subclasses for entities the model package will continue to own how we create update and fetch them gradually move more behaviour into these objects as with the current model refactor when our minimum os dependency hits ios directly read objects into views using sectionfetchrequest and drop the view models all the satellite app services that use the model but do not have a visual presence in the app e g notification scheduling can continue to be driven with tests using the model these consumers of the model can act as a sanity check for the apis as we move things around
51
23,754,645,409
IssuesEvent
2022-09-01 01:04:28
meltano/sdk
https://api.github.com/repos/meltano/sdk
closed
Add reference paginator implementations
architecture decision kind/Tech Debt valuestream/SDK migrated from gitlab
Migrated from GitLab: https://gitlab.com/meltano/sdk/-/issues/318 Originally created by @edgarrmondragon on 2022-01-29 01:34:13 --- ## Summary [//]: # (Concisely summarize the feature you are proposing.) Add reference and common pagination implementations in a similar fashion to _authenticators_. ## Proposed benefits [//]: # (Concisely summarize the benefits this feature would bring to yourself and other users.) There is a limited number of pagination standards, which may only differ in minute details, much like authentication headers (`Authorization: Bearer <token>`, `Authorization: Token <>`), so having a sensible set of pre-built implementations may simplify things further for users by allowing them to pick one off-the-shelf. Another benefit is that having dedicated pagination classes makes unit-testing them much easier. Yet another benefit of moving to a dedicated class for pagination is that the paginator state doesn't need to be limited to the previous value but can include arbitrary attributes, like the last-seen record (https://gitlab.com/meltano/sdk/-/issues/124+). ## Proposal details [//]: # (In as much detail as you are able, describe the feature you'd like to build or would like to see built.) I have a reference implementation [here](https://github.com/edgarrmondragon/tap-readthedocs/pull/13/files). **TL;DR** ```python class APIPaginator: """An API paginator object.""" @property def current_value(self) -> TPageToken: """Get the current pagination value.""" ... @property def finished(self) -> bool: """Get a flag that indicates if the last page of data has been reached.""" ... @property def count(self) -> int: """Count the number of pages traversed so far.""" ... def advance(self, response: Response) -> None: """Get a new page value and advance the current one.""" ... def has_more(self, response: Response) -> bool: """Override this method to check if the endpoint has any pages left.""" ... @abstractmethod def get_next(self, response: Response) -> Optional[TPageToken]: """Get the next pagination token or index from the API response.""" ... ``` ## Best reasons not to build [//]: # (Will this negatively affect any existing functionality? Do you anticipate any breaking changes versus what may already be working today? Make the counter-argument to your proposal here.) Can't think of any. The current `RESTStream.get_next_page_token` can be slowly deprecated with the introduction of a paginator that wraps the stream (as in [tap-readthedocs/client.py at 09dca8c653cd73e51ce265e239c94c68479481b1 · edgarrmondragon/tap-readthedocs · GitHub](https://github.com/edgarrmondragon/tap-readthedocs/blob/09dca8c653cd73e51ce265e239c94c68479481b1/tap_readthedocs/client.py#L17-L48)).
1
Add reference paginator implementations - Migrated from GitLab: https://gitlab.com/meltano/sdk/-/issues/318 Originally created by @edgarrmondragon on 2022-01-29 01:34:13 --- ## Summary [//]: # (Concisely summarize the feature you are proposing.) Add reference and common pagination implementations in a similar fashion to _authenticators_. ## Proposed benefits [//]: # (Concisely summarize the benefits this feature would bring to yourself and other users.) There is a limited number of pagination standards, which may only differ in minute details, much like authentication headers (`Authorization: Bearer <token>`, `Authorization: Token <>`), so having a sensible set of pre-built implementations may simplify things further for users by allowing them to pick one off-the-shelf. Another benefit is that having dedicated pagination classes makes unit-testing them much easier. Yet another benefit of moving to a dedicated class for pagination is that the paginator state doesn't need to be limited to the previous value but can include arbitrary attributes, like the last-seen record (https://gitlab.com/meltano/sdk/-/issues/124+). ## Proposal details [//]: # (In as much detail as you are able, describe the feature you'd like to build or would like to see built.) I have a reference implementation [here](https://github.com/edgarrmondragon/tap-readthedocs/pull/13/files). **TL;DR** ```python class APIPaginator: """An API paginator object.""" @property def current_value(self) -> TPageToken: """Get the current pagination value.""" ... @property def finished(self) -> bool: """Get a flag that indicates if the last page of data has been reached.""" ... @property def count(self) -> int: """Count the number of pages traversed so far.""" ... def advance(self, response: Response) -> None: """Get a new page value and advance the current one.""" ... def has_more(self, response: Response) -> bool: """Override this method to check if the endpoint has any pages left.""" ... @abstractmethod def get_next(self, response: Response) -> Optional[TPageToken]: """Get the next pagination token or index from the API response.""" ... ``` ## Best reasons not to build [//]: # (Will this negatively affect any existing functionality? Do you anticipate any breaking changes versus what may already be working today? Make the counter-argument to your proposal here.) Can't think of any. The current `RESTStream.get_next_page_token` can be slowly deprecated with the introduction of a paginator that wraps the stream (as in [tap-readthedocs/client.py at 09dca8c653cd73e51ce265e239c94c68479481b1 · edgarrmondragon/tap-readthedocs · GitHub](https://github.com/edgarrmondragon/tap-readthedocs/blob/09dca8c653cd73e51ce265e239c94c68479481b1/tap_readthedocs/client.py#L17-L48)).
architecture
add reference paginator implementations migrated from gitlab originally created by edgarrmondragon on summary concisely summarize the feature you are proposing add reference and common pagination implementations in a similar fashion to authenticators proposed benefits concisely summarize the benefits this feature would bring to yourself and other users there is a limited number of pagination standards which may only differ in minute details much like authentication headers authorization bearer authorization token so having a sensible set of pre built implementations may simplify things further for users by allowing them to pick one off the shelf another benefit is that having dedicated pagination classes makes unit testing them much easier yet another benefit of moving to a dedicated class for pagination is that the paginator state doesn t need to be limited to the previous value but can include arbitrary attributes like the last seen record proposal details in as much detail as you are able describe the feature you d like to build or would like to see built i have a reference implementation tl dr python class apipaginator an api paginator object property def current value self tpagetoken get the current pagination value property def finished self bool get a flag that indicates if the last page of data has been reached property def count self int count the number of pages traversed so far def advance self response response none get a new page value and advance the current one def has more self response response bool override this method to check if the endpoint has any pages left abstractmethod def get next self response response optional get the next pagination token or index from the api response best reasons not to build will this negatively affect any existing functionality do you anticipate any breaking changes versus what may already be working today make the counter argument to your proposal here can t think of any the current reststream get next page token can be slowly deprecated with the introduction of a paginator that wraps the stream as in
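A runnable sketch of how the proposal above might look from a tap author's side; the base class here is a trimmed stand-in for the interface quoted in the issue, not the SDK's final API:

```python
from abc import ABC, abstractmethod
from typing import Generic, Optional, TypeVar

TPageToken = TypeVar("TPageToken")

class APIPaginator(ABC, Generic[TPageToken]):
    """Trimmed stand-in for the base class sketched in the issue."""

    def __init__(self, start_value: Optional[TPageToken] = None) -> None:
        self._value = start_value
        self._finished = False
        self._count = 0

    @property
    def current_value(self) -> Optional[TPageToken]:
        return self._value

    @property
    def finished(self) -> bool:
        return self._finished

    @property
    def count(self) -> int:
        return self._count

    def advance(self, response) -> None:
        """Store the next token, or mark the paginator as finished."""
        self._count += 1
        next_value = self.get_next(response)
        if next_value is None:
            self._finished = True
        self._value = next_value

    @abstractmethod
    def get_next(self, response) -> Optional[TPageToken]:
        """Extract the next token from an API response."""

class NextURLPaginator(APIPaginator[str]):
    """Off-the-shelf flavour: APIs returning a top-level 'next' URL."""

    def get_next(self, response) -> Optional[str]:
        return response.json().get("next") or None
```

A tap would then instantiate one paginator per request loop and call `advance(response)` after each page, instead of re-deriving the state inside `get_next_page_token`.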
52
24,159,351,708
IssuesEvent
2022-09-22 10:17:16
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Improve readability of column moving logic
type: enhancement affects: architecture affects: technical debt status: triage
## Problem Our column moving logic is difficult to get into. There's a related bug I've encountered, and I'm of the opinion that a focused effort to improve the column moving logic's readability is warranted. ## Proposed solution @silentninja proposed that we do this together on a call. ## Additional context @silentninja and @mathemancer seem to have insight into that logic.
1
Improve readability of column moving logic - ## Problem Our column moving logic is difficult to get into. There's a related bug I've encountered, and I'm of the opinion that a focused effort to improve the column moving logic's readability is warranted. ## Proposed solution @silentninja proposed that we do this together on a call. ## Additional context @silentninja and @mathemancer seem to have insight into that logic.
architecture
improve readability of column moving logic problem our column moving logic is difficult to get into there s a related bug i ve encountered and i m of the opinion that a focused effort to improve the column moving logic s readability is warranted proposed solution silentninja proposed that we do this together on a call additional context silentninja and mathemancer seem to have insight into that logic
53
24,443,675,959
IssuesEvent
2022-10-06 16:11:34
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Optimize MetaData use
type: enhancement affects: architecture affects: technical debt work: backend work: database restricted: maintainers status: started
### Observation - Metadata reflection by SqlAlchemy results in 10 queries each time a new metadata object is created/reflected, - and, most of the methods don't reuse metadata, - currently loading the table page requires ~1600 queries to the database; - `mathesar.reflection.reflect_db_objects`: - reflecting each database object in our model (tables, schemas, databases, columns, constraints) adds 12 queries for each object as it uses a new MetaData object instead of reusing the existing MetaData; - `db.columns.operations.select.get_columns_name_from_attnums`: - fetching column name is needed for accessing the SA column object and each call to fetch the column name results in a metadata reflection call. ### Cause - `db` module methods not reusing metadata. ### Solution - Use a single `MetaData` instance; - Possibly use a multi-request `MetaData` cache; - Possibilities - Use session-level cache; - Django cache; - Use multi-session cache; - Maybe file or db to store pickled metadata object; - How to keep cache's validity up to date? - Maybe invalidate it when doing mutating operations.
1
Optimize MetaData use - ### Observation - Metadata reflection by SqlAlchemy results in 10 queries each time a new metadata object is created/reflected, - and, most of the methods don't reuse metadata, - currently loading the table page requires ~1600 queries to the database; - `mathesar.reflection.reflect_db_objects`: - reflecting each database object in our model (tables, schemas, databases, columns, constraints) adds 12 queries for each object as it uses a new MetaData object instead of reusing the existing MetaData; - `db.columns.operations.select.get_columns_name_from_attnums`: - fetching column name is needed for accessing the SA column object and each call to fetch the column name results in a metadata reflection call. ### Cause - `db` module methods not reusing metadata. ### Solution - Use a single `MetaData` instance; - Possibly use a multi-request `MetaData` cache; - Possibilities - Use session-level cache; - Django cache; - Use multi-session cache; - Maybe file or db to store pickled metadata object; - How to keep cache's validity up to date? - Maybe invalidate it when doing mutating operations.
architecture
optimize metadata use observation metadata reflection by sqlalchemy results in query each time a new metadata object is created reflected and most of the methods don t reuse metadata currently loading the table page requires queries to the database mathesar reflection reflect db objects reflecting each database object in our model tables schemas databases columns constraints adds query for each object as it uses a new metadata object instead of reusing the existing metadata db columns operations select get columns name from attnums fetching column name is needed for accessing the sa column object and each call to fetch the column name results in a metadata reflection call cause db module methods not reusing metadata solution use a single metadata instance possibly use a multi request metadata cache possibilities use session level cache django cache use multi session cache maybe file or db to store pickled metadata object how to keep cache s validity up to date maybe invalidate it when doing mutating operations
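The proposed fix, sketched with SQLAlchemy's public API: reflect into one shared `MetaData` and reuse it, rather than paying the ~10 reflection queries for every fresh `MetaData()`. The connection string and helper names below are placeholders, not Mathesar code:

```python
from typing import Optional
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("postgresql://user:pass@localhost/db")  # placeholder DSN
_shared_metadata = MetaData()  # reflected lazily, reused across calls

def get_table(name: str, schema: Optional[str] = None) -> Table:
    """Return a reflected Table, hitting the database only on a cache miss."""
    key = f"{schema}.{name}" if schema else name
    if key not in _shared_metadata.tables:
        # Reflect just this table into the shared MetaData.
        Table(name, _shared_metadata, schema=schema, autoload_with=engine)
    return _shared_metadata.tables[key]

def invalidate_metadata() -> None:
    """Drop all cached reflections; call after mutating DDL operations."""
    _shared_metadata.clear()
```

The invalidation hook mirrors the issue's last bullet: the cache stays valid only as long as no DDL runs, so mutating operations should clear (or selectively refresh) it.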
54
24,464,940,430
IssuesEvent
2022-10-07 14:17:01
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
closed
Make experimental features easier to add and remove
category: architecture type: technical debt
As mentioned in #7011, adding or removing a toggle for experimental features takes a [bigger code change than one might expect](https://github.com/woocommerce/woocommerce-ios/pull/6954). Ideally we'd want these changes to be leaner, just like [adding or removing a feature flag](https://github.com/woocommerce/woocommerce-ios/pull/6995). Part of this would be solved by #7011, but we can also improve the Experimental Features screen, making it easier and more automatic to add new flags there.
1
Make experimental features easier to add and remove - As mentioned in #7011, adding or removing a toggle for experimental features takes a [bigger code change than one might expect](https://github.com/woocommerce/woocommerce-ios/pull/6954). Ideally we'd want these changes to be leaner, just like [adding or removing a feature flag](https://github.com/woocommerce/woocommerce-ios/pull/6995). Part of this would be solved by #7011, but we can also improve the Experimental Features screen, making it easier and more automatic to add new flags there.
architecture
make experimental features easier to add and remove as mentioned in adding or removing a toggle for experimental features takes a ideally we d want these changes to be leaner just like part of this would be solved by but we can also improve the experimental features screen making it easier and more automatic to add new flags there
55
25,474,118,939
IssuesEvent
2022-11-25 12:54:23
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Fix filter/hint system so that it doesn't have to be stubbed
type: enhancement affects: dx affects: architecture affects: technical debt work: backend status: draft
We're currently hardcoding on the frontend what filters are available and/or how they can be used. We designed the hint system for that, but there was some problem with that, so we ended up hardcoding, and postponed fixing the actual problem. I don't recall what the actual problem was. @pavish do you have insights?
1
Fix filter/hint system so that it doesn't have to be stubbed - We're currently hardcoding on the frontend what filters are available and/or how they can be used. We designed the hint system for that, but there was some problem with that, so we ended up hardcoding, and postponed fixing the actual problem. I don't recall what the actual problem was. @pavish do you have insights?
architecture
fix filter hint system so that it doesn t have to be stubbed we re currently hardcoding on the frontend what filters are available and or how they can be used we designed the hint system for that but there was some problem with that so we ended up hardcoding and postponed fixing the actual problem i don t recall what the actual problem was pavish do you have insights
56
25,481,399,070
IssuesEvent
2022-11-25 21:47:10
contribute-design/contribute.design
https://api.github.com/repos/contribute-design/contribute.design
opened
Refactor CloudFlare workers
πŸ— architecture πŸ— debt
**Why should it be implemented?** Our current worker implementation is extremely immature... We basically commit the workers to the monorepo but deploy them manually to Cloudflare. **Describe the solution** - Ideally we'd use lerna or something similar to manage each worker as its own package - Each worker would be written in typescript - Workers should be able to share code between each other - Building and deployment should happen automatically via a GH action or similar **Additional context** There's a nice boilerplate over here: https://github.com/cmackenzie1/holster
1
Refactor CloudFlare workers - **Why should it be implemented?** Our current worker implementation is extremely immature... We basically commit the workers to the monorepo but deploy them manually to Cloudflare. **Describe the solution** - Ideally we'd use lerna or something similar to manage each worker as its own package - Each worker would be written in typescript - Workers should be able to share code between each other - Building and deployment should happen automatically via a GH action or similar **Additional context** There's a nice boilerplate over here: https://github.com/cmackenzie1/holster
architecture
refactor cloudflare workers why should it be implemented our current worker implementation is extremely immature we basically commit the workers to the monorepo but deploy them manually to cloudflare describe the solution ideally we d use lerna or something similar to manage each worker as its own package each worker would be written in typescript workers should be able to share code between each other building and deployment should happen automatically via a gh action or similar additional context there s a nice boilerplate over here
59
26,865,654,971
IssuesEvent
2023-02-03 23:17:45
FuelLabs/fuel-core
https://api.github.com/repos/FuelLabs/fuel-core
opened
Test suite is too slow
tech-debt fuel-core architecture
Our full test sweet is taking too long to run. Especially the p2p tests. This slows down our development cycle and leads to avoiding running the entire suite.
1
Test suite is too slow - Our full test suite is taking too long to run, especially the p2p tests. This slows down our development cycle and leads to avoiding running the entire suite.
architecture
test suite is too slow our full test suite is taking too long to run especially the tests this slows down our development cycle and leads to avoiding running the entire suite
60
30,629,687,218
IssuesEvent
2023-07-24 13:49:23
woocommerce/woocommerce-ios
https://api.github.com/repos/woocommerce/woocommerce-ios
opened
Make WebKit User Agent run on main thread explicitly
type: enhancement category: architecture type: technical debt
Opening this issue to keep an eye on this potential problem and attempt improvements in the future: At the moment [our WebKit user agent implementation](https://github.com/woocommerce/woocommerce-ios/blob/7c6ecfeefb7c3af6bd6fa328efe3b0aa787e1d6f/Networking/Networking/Settings/UserAgent.swift#L15) (in order to set up, for example, authenticated wpcom requests) relies on the action dispatcher to ensure that any request will run on the main thread; however, this is not enforced unless we actually call the remote that creates and enqueues these requests through an action. This means that while it works as a side effect (we happen to call these remotes via actions in the dispatcher), we risk a runtime crash when any remote that involves a webkit user agent instantiation is not run through the dispatcher, since there's no compiler-check that prevents us from doing so. One potential solution could be to mark the method as `@MainActor `, but this propagates an error through networking since we would have a main-actor static property attempted to be used across multiple non-main-actor contexts. Another option could be to wrap it in a call to DispatchQueue.main.async. Ref: p1690199791835769-slack-C03L1NF1EA3
1
Make WebKit User Agent run on main thread explicitly - Opening this issue to keep an eye on this potential problem and attempt improvements in the future: At the moment [our WebKit user agent implementation](https://github.com/woocommerce/woocommerce-ios/blob/7c6ecfeefb7c3af6bd6fa328efe3b0aa787e1d6f/Networking/Networking/Settings/UserAgent.swift#L15) (in order to set up, for example, authenticated wpcom requests) relies on the action dispatcher to ensure that any request will run on the main thread; however, this is not enforced unless we actually call the remote that creates and enqueues these requests through an action. This means that while it works as a side effect (we happen to call these remotes via actions in the dispatcher), we risk a runtime crash when any remote that involves a webkit user agent instantiation is not run through the dispatcher, since there's no compiler-check that prevents us from doing so. One potential solution could be to mark the method as `@MainActor `, but this propagates an error through networking since we would have a main-actor static property attempted to be used across multiple non-main-actor contexts. Another option could be to wrap it in a call to DispatchQueue.main.async. Ref: p1690199791835769-slack-C03L1NF1EA3
architecture
make webkit user agent run on main thread explicitly opening this issue to keep an eye on this potential problem and attempt improvements in the future at the moment in order to set up for example authenticated wpcom requests relies on the action dispatcher to ensure that any request will run on the main thread however this is not enforced unless we actually call the remote that creates and enqueues these requests through an action this means that while it works as a side effect we happen to call these remotes via actions in the dispatcher we risk a runtime crash when any remote that involves a webkit user agent instantiation is not run through the dispatcher since there s no compiler check that prevents us from doing so one potential solution could be to mark the method as mainactor but this propagates an error through networking since we would have a main actor static property attempted to be used across multiple non main actor contexts another option could be to wrap it in a call to dispatchqueue main async ref slack
61
30,926,869,098
IssuesEvent
2023-08-06 15:37:16
spacemeshos/go-spacemesh
https://api.github.com/repos/spacemeshos/go-spacemesh
closed
Manage running goroutines and graceful shutdown
technical debt devex concurrency architecture
## Motivation Graceful shutdown is proving to be a challenge when multiple goroutines are running in the background. This has two parts: 1. Signaling a shutdown and having each service respond by terminating gracefully (this part is working, but each service has its own, slightly different, implementation). 2. being able to know when all services have completed. This is important in production, where we want to ensure no data is lost due to a dirty shutdown, and in tests where we want to terminate quickly and cleanly without having to add unnecessary waiting periods. ## Method We want to integrate [Tomb](https://pkg.go.dev/gopkg.in/tomb.v2?tab=doc), a package for handling clean goroutine tracking and termination. Tomb provides a `Go()` method that's intended to replace calling the `go` keyword directly to start goroutines. Internally, it uses a waitgroup to track how many goroutines have been started and how many have completed. It can also provide the first error that triggered a shutdown, if it was due to an error. Tomb should be integrated in a single module first and then additional PRs can integrate it into more modules. Eventually: - No goroutine should be started using a bare `go` keyword. - All method calls that can accept a `Context` should receive one, provided by Tomb. - All `select` statements should have an early termination clause using `tomb.Dying()` (a method returning a channel that's closed when the Tomb is killed). If this clause is invoked, an `ErrDying` (a Tomb constant) should be returned by the goroutine and Tomb knows to ignore it as a kill reason. - All received context objects (specifically in api handlers) should be wrapped using `tomb.Context(ctx)` if used (I think they're never used as of writing this). - New contexts should never be generated from scratch and we should always use the Tomb for context (this happens mostly in the P2P module, but also in the PoET client). ## WIP - [ ] Add specific tasks (where to initialize the Tomb, how to do shutdown) - [ ] List some candidate modules for the first integration
1
Manage running goroutines and graceful shutdown - ## Motivation Graceful shutdown is proving to be a challenge when multiple goroutines are running in the background. This has two parts: 1. Signaling a shutdown and having each service respond by terminating gracefully (this part is working, but each service has its own, slightly different, implementation). 2. being able to know when all services have completed. This is important in production, where we want to ensure no data is lost due to a dirty shutdown, and in tests where we want to terminate quickly and cleanly without having to add unnecessary waiting periods. ## Method We want to integrate [Tomb](https://pkg.go.dev/gopkg.in/tomb.v2?tab=doc), a package for handling clean goroutine tracking and termination. Tomb provides a `Go()` method that's intended to replace calling the `go` keyword directly to start goroutines. Internally, it uses a waitgroup to track how many goroutines have been started and how many have completed. It can also provide the first error that triggered a shutdown, if it was due to an error. Tomb should be integrated in a single module first and then additional PRs can integrate it into more modules. Eventually: - No goroutine should be started using a bare `go` keyword. - All method calls that can accept a `Context` should receive one, provided by Tomb. - All `select` statements should have an early termination clause using `tomb.Dying()` (a method returning a channel that's closed when the Tomb is killed). If this clause is invoked, an `ErrDying` (a Tomb constant) should be returned by the goroutine and Tomb knows to ignore it as a kill reason. - All received context objects (specifically in api handlers) should be wrapped using `tomb.Context(ctx)` if used (I think they're never used as of writing this). - New contexts should never be generated from scratch and we should always use the Tomb for context (this happens mostly in the P2P module, but also in the PoET client). ## WIP - [ ] Add specific tasks (where to initialize the Tomb, how to do shutdown) - [ ] List some candidate modules for the first integration
architecture
manage running goroutines and graceful shutdown motivation graceful shutdown is proving to be a challenge when multiple goroutines are running in the background this has two parts signaling a shutdown and having each service respond by terminating gracefully this part is working but each service has its own slightly different implementation being able to know when all services have completed this is important in production where we want to ensure no data is lost due to a dirty shutdown and in tests where we want to terminate quickly and cleanly without having to add unnecessary waiting periods method we want to integrate a package for handling clean goroutine tracking and termination tomb provides a go method that s intended to replace calling the go keyword directly to start goroutines internally it uses a waitgroup to track how many goroutines have been started and how many have completed it can also provide the first error that triggered a shutdown if it was due to an error tomb should be integrated in a single module first and then additional prs can integrate it into more modules eventually no goroutine should be started using a bare go keyword all method calls that can accept a context should receive one provided by tomb all select statements should have an early termination clause using tomb dying a method returning a channel that s closed when the tomb is killed if this clause is invoked an errdying a tomb constant should be returned by the goroutine and tomb knows to ignore it as a kill reason all received context objects specifically in api handlers should be wrapped using tomb context ctx if used i think they re never used as of writing this new contexts should never be generated from scratch and we should always use the tomb for context this happens mostly in the module but also in the poet client wip add specific tasks where to initialize the tomb how to do shutdown list some candidate modules for the first integration
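Tomb's contract in the issue above -- spawn through one object, signal death, and wait for every worker -- translates to roughly this pattern in Python asyncio (a sketch only; Tomb itself is Go, and every name below is illustrative):

```python
import asyncio
from typing import Awaitable, Callable, Optional, Set

class Tomb:
    """Track spawned workers, signal shutdown, and await a clean exit."""

    def __init__(self) -> None:
        self._tasks: Set[asyncio.Task] = set()
        self._dying = asyncio.Event()
        self._first_error: Optional[BaseException] = None

    def go(self, fn: Callable[["Tomb"], Awaitable[None]]) -> None:
        # Stands in for tomb.Go(), which replaces the bare `go` keyword.
        self._tasks.add(asyncio.create_task(self._wrap(fn)))

    async def _wrap(self, fn: Callable[["Tomb"], Awaitable[None]]) -> None:
        try:
            await fn(self)
        except Exception as exc:
            if self._first_error is None:
                self._first_error = exc  # remember the kill reason
            self.kill()

    def kill(self) -> None:
        self._dying.set()  # analogous to closing the Dying() channel

    async def dying(self) -> None:
        await self._dying.wait()  # workers race this against their work

    async def wait(self) -> Optional[BaseException]:
        # Analogous to tomb.Wait(): blocks until all workers finish,
        # then reports the first error, if any.
        await asyncio.gather(*self._tasks, return_exceptions=True)
        return self._first_error
```

The same two guarantees the issue asks for fall out of this shape: a single shutdown signal every worker observes, and a completion barrier so production shutdowns and tests never have to guess with sleeps.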
62
31,168,179,526
IssuesEvent
2023-08-16 21:39:17
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
`PATCH` requests to the Table API should support changing the table's name and columns at the same time
type: enhancement affects: architecture affects: technical debt work: backend work: database status: blocked
## Problem <!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.--> The current implementation of the table API does not allow you to update both the `name` and `columns` at the same time. This is counterintuitive and it would be ideal if you could update both at once. ## Proposed solution <!-- A clear and concise description of your proposed solution or feature. --> We need to figure out how to update the name first, use the updated table name in the column-related changes and roll the whole thing back if any of the operations fail (including the name change). ## Additional context <!-- Add any other context or screenshots about the feature request here.--> - See conversation on #562 - We should do #592 first since this involves more single-transaction operations. Marking this issue as blocked by it. - [Postgres wiki page on transactional DDL](https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis)
1
`PATCH` requests to the Table API should support changing the table's name and columns at the same time - ## Problem <!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.--> The current implementation of the table API does not allow you to update both the `name` and `columns` at the same time. This is counterintuitive and it would be ideal if you could update both at once. ## Proposed solution <!-- A clear and concise description of your proposed solution or feature. --> We need to figure out how to update the name first, use the updated table name in the column-related changes and roll the whole thing back if any of the operations fail (including the name change). ## Additional context <!-- Add any other context or screenshots about the feature request here.--> - See conversation on #562 - We should do #592 first since this involves more single-transaction operations. Marking this issue as blocked by it. - [Postgres wiki page on transactional DDL](https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis)
architecture
patch requests to the table api should support changing the table s name and columns at the same time problem the current implementation of the table api does not allow you to update both the name and columns at the same time this is counterintuitive and it would be ideal if you could update both at once proposed solution we need to figure out how to update the name first use the updated table name in the column related changes and roll the whole thing back if any of the operations fail including the name change additional context see conversation on we should do first since this involves more single transaction operations marking this issue as blocked by it
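A hedged sketch of the single-transaction behavior this record asks for, in Go with database/sql; the table and column names are invented, and Postgres supports transactional DDL, so a failure in either statement rolls back the rename too:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver; DDL participates in transactions
)

// renameTableAndColumn applies both changes atomically: the column
// statement uses the *new* table name, and any failure rolls back
// the rename as well.
func renameTableAndColumn(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	if _, err := tx.Exec(`ALTER TABLE old_name RENAME TO new_name`); err != nil {
		return err
	}
	if _, err := tx.Exec(`ALTER TABLE new_name RENAME COLUMN a TO b`); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	// Placeholder DSN; point it at a real database to try this out.
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := renameTableAndColumn(db); err != nil {
		log.Fatal(err)
	}
}
```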
0
2,490,609,947
IssuesEvent
2015-01-02 17:29:17
FineUploader/fine-uploader
https://api.github.com/repos/FineUploader/fine-uploader
closed
5 - Get build script under control
5 - Done build technical debt
Our grunt script is out of control. It's currently just a mess of copy and pasted code with countless instances of code repetition. Furthermore, the majority of the logic resides in one single coffeescript file that is approaching 1000 lines. We'll have to do something(s) to rein in this mess. <!--- @huboard:{"order":1145.0,"custom_state":""} -->
1
5 - Get build script under control - Our grunt script is out of control. It's currently just a mess of copy and pasted code with countless instances of code repetition. Furthermore, the majority of the logic resides in one single coffeescript file that is approaching 1000 lines. We'll have to do something(s) to rein in this mess. <!--- @huboard:{"order":1145.0,"custom_state":""} -->
build
get build script under control our grunt script is out of control it s currently just a mess of copy and pasted code with countless instances of code repetition furthermore the majority of the logic resides in one single coffeescript file that is approaching lines we ll have to do something s to rein in this mess huboard order custom state
1
3,202,827,471
IssuesEvent
2015-10-02 15:50:48
openshift/origin
https://api.github.com/repos/openshift/origin
closed
Unify build strategy fields
area/techdebt area/usability component/build priority/P3
Currently we're starting to copy&paste fields between different build strategies. So far there are two of them that are almost fully shared between strategies: `Image` and `Env` (although the latter isn't present in `DockerBuildStrategy`). Additionally, when @mfojtik's PR https://github.com/openshift/origin/pull/1411 lands, there'll be `DockerRegistrySecretRef`. We should move those fields into `BuildStrategy`.
1
Unify build strategy fields - Currently we're starting to copy&paste fields between different build strategies. So far there are two of them that are almost fully shared between strategies: `Image` and `Env` (although the latter isn't present in `DockerBuildStrategy`). Additionally, when @mfojtik's PR https://github.com/openshift/origin/pull/1411 lands, there'll be `DockerRegistrySecretRef`. We should move those fields into `BuildStrategy`.
build
unify build strategy fields currently we re starting to copy paste fields between different build strategies so far there are two of them that are almost fully shared between strategies image and env although the latter isn t present in dockerbuildstrategy additionally when mfojtik s pr lands there ll be dockerregistrysecretref we should move those fields into buildstrategy
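To illustrate the shape of the refactor this record proposes, a simplified Go sketch; these are stand-in types, not the actual OpenShift API definitions:

```go
package buildapi

// EnvVar is a simplified environment entry.
type EnvVar struct {
	Name  string
	Value string
}

// BuildStrategy hoists the fields shared by every strategy, so they
// are declared once instead of copied into each strategy struct.
type BuildStrategy struct {
	Image string   // builder image, common to all strategies
	Env   []EnvVar // environment, common to all strategies

	// Exactly one of these is set; each keeps only what is unique to it.
	Docker *DockerBuildStrategy
	Source *SourceBuildStrategy
	Custom *CustomBuildStrategy
}

type DockerBuildStrategy struct{ NoCache bool }
type SourceBuildStrategy struct{ Scripts string }
type CustomBuildStrategy struct{ ExposeDockerSocket bool }
```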
3
3,642,151,696
IssuesEvent
2016-02-14 04:26:15
cortoproject/corto
https://api.github.com/repos/cortoproject/corto
opened
Revise and simplify buildsystem
Corto:BuildSystem Corto:PackageManagement Corto:TechnicalDebt Corto:Usability
With #450 implemented, the buildsystem merged components with packages, and generators with libraries. To further simplify the buildsystem, libraries can be merged with packages as well. Packages are more expressive, allow for better organization of libraries and prevent name clashes. Generators are currently stored in the `lib/corto/<version>/libraries` folder. Generators will become packages as well, and shall be stored in `corto/gen/<binding>/<name>`. For example: `corto/gen/c/api`, or `corto/gen/doc/html`. Since these packages have no definition file, the current implementation of the buildsystem requires a user to specify `NOCORTO`. However, that disables linking with Corto, and also disables automatic management of dependencies & include files, something that was previously done by components. This existing functionality must be consolidated with the new design. A proposal: Create a package with a definition file `foo.cx`. For each interface in the definition file a managed implementation file will be generated, along with header files. Include dependencies are managed. ``` corto create package Foo ``` Create a package with automatic dependency (include file) management. Minimal code generation will be required. The package will have a managed `include/Foo.h` header file. The package will link with Corto. The buildsystem will detect that no definition file is available, and will therefore switch to limited code generation. ``` corto create package Foo --empty ``` Create a package for which no code is generated, and which does not link with Corto. This is useful for wrapping 3rd party libraries or for projects where only the Corto buildsystem is required. The generated rakefile will contain the line `NOCORTO = true` which signals the build system to not generate any code. ``` corto create package Foo --nocorto ``` Additionally, a `--local` flag can be provided which will ensure that the package is not installed to an environment (either local or global). This is for example useful for test suites.
1
Revise and simplify buildsystem - With #450 implemented, the buildsystem merged components with packages, and generators with libraries. To further simplify the buildsystem, libraries can be merged with packages as well. Packages are more expressive, allow for better organization of libraries and prevent name clashes. Generators are currently stored in the `lib/corto/<version>/libraries` folder. Generators will become packages as well, and shall be stored in `corto/gen/<binding>/<name>`. For example: `corto/gen/c/api`, or `corto/gen/doc/html`. Since these packages have no definition file, the current implementation of the buildsystem requires a user to specify `NOCORTO`. However, that disables linking with Corto, and also disables automatic management of dependencies & include files, something that was previously done by components. This existing functionality must be consolidated with the new design. A proposal: Create a package with a definition file `foo.cx`. For each interface in the definition file a managed implementation file will be generated, along with header files. Include dependencies are managed. ``` corto create package Foo ``` Create a package with automatic dependency (include file) management. Minimal code generation will be required. The package will have a managed `include/Foo.h` header file. The package will link with Corto. The buildsystem will detect that no definition file is available, and will therefore switch to limited code generation. ``` corto create package Foo --empty ``` Create a package for which no code is generated, and which does not link with Corto. This is useful for wrapping 3rd party libraries or for projects where only the Corto buildsystem is required. The generated rakefile will contain the line `NOCORTO = true` which signals the build system to not generate any code. ``` corto create package Foo --nocorto ``` Additionally, a `--local` flag can be provided which will ensure that the package is not installed to an environment (either local or global). This is for example useful for test suites.
build
revise and simplify buildsystem with implemented the buildsystem merged components with packages and generators with libraries to further simplify the buildsystem libraries can be merged with packages as well packages are more expressive allow for better organization of libraries and prevent name clashes generators are currently stored in the lib corto libraries folder generators will become packages as well and shall be stored in corto gen for example corto gen c api or corto gen doc html since these packages have no definition file the current implementation of the buildsystem requires a user to specify nocorto however that disables linking with corto and also disables automatic management of dependencies include files something that was previously done by components this existing functionality must be consolidated with the new design a proposal create a package with a definition file foo cx for each interface in the definition file a managed implementation file will be generated along with header files include dependencies are managed corto create package foo create a package with automatic dependency include file management minimal code generation will be required the package will have a managed include foo h header file the package will link with corto the buildsystem will detect that no definition file is available and will therefore switch to limited code generation corto create package foo empty create a package for which no code is generated and which does not link with corto this is useful for wrapping party libraries or for projects where only the corto buildsystem is required the generated rakefile will contain the line nocorto true which signals the build system to not generate any code corto create package foo nocorto additionally a local flag can be provided which will ensure that the package is not installed to an environment either local or global this is for example useful for test suites
4
3,799,420,730
IssuesEvent
2016-03-23 15:51:36
mesosphere/marathon
https://api.github.com/repos/mesosphere/marathon
opened
Resident Tasks: Flaky test: persistent volume will be re-attached and keep state
build debt Epic-217
``` [14:24:16][Step 3/3] - persistent volume will be re-attached and keep state *** FAILED *** (30 seconds, 378 milliseconds) [14:24:16][Step 3/3] java.lang.AssertionError: Waiting for event deployment_success to arrive took longer than 30 seconds. Give up. [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.WaitTestSupport$.next$1(WaitTestSupport.scala:30) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.WaitTestSupport$.waitFor(WaitTestSupport.scala:36) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.MarathonCallbackTestSupport$class.waitForEventMatching(MarathonCallbackTestSupport.scala:48) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.waitForEventMatching(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.MarathonCallbackTestSupport$class.waitForEventWith(MarathonCallbackTestSupport.scala:52) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.waitForEventWith(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.MarathonCallbackTestSupport$class.waitForEvent(MarathonCallbackTestSupport.scala:32) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.waitForEvent(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$3.apply(ResidentTaskIntegrationTest.scala:56) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$3.apply(ResidentTaskIntegrationTest.scala:44) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$test$1.apply$mcV$sp(ResidentTaskIntegrationTest.scala:225) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$test$1.apply(ResidentTaskIntegrationTest.scala:225) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$test$1.apply(ResidentTaskIntegrationTest.scala:225) [14:24:16][Step 3/3] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22) [14:24:16][Step 3/3] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85) [14:24:16][Step 3/3] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104) [14:24:16][Step 3/3] at org.scalatest.Transformer.apply(Transformer.scala:22) [14:24:16][Step 3/3] at org.scalatest.Transformer.apply(Transformer.scala:20) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:158) [14:24:16][Step 3/3] at org.scalatest.Suite$class.withFixture(Suite.scala:1121) [14:24:16][Step 3/3] at org.scalatest.FunSuite.withFixture(FunSuite.scala:1559) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:155) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:167) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:167) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:167) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.org$scalatest$BeforeAndAfter$$super$runTest(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200) [14:24:16][Step 3/3] at 
mesosphere.marathon.integration.ResidentTaskIntegrationTest.runTest(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:200) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:200) [14:24:16][Step 3/3] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413) [14:24:16][Step 3/3] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401) [14:24:16][Step 3/3] at scala.collection.immutable.List.foreach(List.scala:381) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:200) [14:24:16][Step 3/3] at org.scalatest.FunSuite.runTests(FunSuite.scala:1559) [14:24:16][Step 3/3] at org.scalatest.Suite$class.run(Suite.scala:1423) [14:24:16][Step 3/3] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1559) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:204) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:204) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.runImpl(Engine.scala:545) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:204) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.org$scalatest$BeforeAndAfterAllConfigMap$$super$run(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfterAllConfigMap$class.liftedTree1$1(BeforeAndAfterAllConfigMap.scala:248) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfterAllConfigMap$class.run(BeforeAndAfterAllConfigMap.scala:247) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.org$scalatest$BeforeAndAfter$$super$run(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.run(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:444) [14:24:16][Step 3/3] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:651) [14:24:16][Step 3/3] at sbt.ForkMain$Run$2.call(ForkMain.java:294) [14:24:16][Step 3/3] at sbt.ForkMain$Run$2.call(ForkMain.java:284) [14:24:16][Step 3/3] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [14:24:16][Step 3/3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [14:24:16][Step 3/3] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [14:24:16][Step 3/3] at java.lang.Thread.run(Thread.java:745) [14:24:16][Step 3/3] + Given An app that writes into a persistent volume [14:24:16][Step 3/3] + When a task is launched [14:24:16][Step 3/3] + Then it successfully writes to the persistent volume and then finishes ```
1
Resident Tasks: Flaky test: persistent volume will be re-attached and keep state - ``` [14:24:16][Step 3/3] - persistent volume will be re-attached and keep state *** FAILED *** (30 seconds, 378 milliseconds) [14:24:16][Step 3/3] java.lang.AssertionError: Waiting for event deployment_success to arrive took longer than 30 seconds. Give up. [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.WaitTestSupport$.next$1(WaitTestSupport.scala:30) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.WaitTestSupport$.waitFor(WaitTestSupport.scala:36) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.MarathonCallbackTestSupport$class.waitForEventMatching(MarathonCallbackTestSupport.scala:48) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.waitForEventMatching(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.MarathonCallbackTestSupport$class.waitForEventWith(MarathonCallbackTestSupport.scala:52) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.waitForEventWith(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at mesosphere.marathon.integration.setup.MarathonCallbackTestSupport$class.waitForEvent(MarathonCallbackTestSupport.scala:32) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.waitForEvent(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$3.apply(ResidentTaskIntegrationTest.scala:56) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$3.apply(ResidentTaskIntegrationTest.scala:44) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$test$1.apply$mcV$sp(ResidentTaskIntegrationTest.scala:225) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$test$1.apply(ResidentTaskIntegrationTest.scala:225) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest$$anonfun$test$1.apply(ResidentTaskIntegrationTest.scala:225) [14:24:16][Step 3/3] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22) [14:24:16][Step 3/3] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85) [14:24:16][Step 3/3] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104) [14:24:16][Step 3/3] at org.scalatest.Transformer.apply(Transformer.scala:22) [14:24:16][Step 3/3] at org.scalatest.Transformer.apply(Transformer.scala:20) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:158) [14:24:16][Step 3/3] at org.scalatest.Suite$class.withFixture(Suite.scala:1121) [14:24:16][Step 3/3] at org.scalatest.FunSuite.withFixture(FunSuite.scala:1559) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:155) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:167) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:167) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:167) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.org$scalatest$BeforeAndAfter$$super$runTest(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at 
org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.runTest(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:200) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:200) [14:24:16][Step 3/3] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413) [14:24:16][Step 3/3] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401) [14:24:16][Step 3/3] at scala.collection.immutable.List.foreach(List.scala:381) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:200) [14:24:16][Step 3/3] at org.scalatest.FunSuite.runTests(FunSuite.scala:1559) [14:24:16][Step 3/3] at org.scalatest.Suite$class.run(Suite.scala:1423) [14:24:16][Step 3/3] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1559) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:204) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:204) [14:24:16][Step 3/3] at org.scalatest.SuperEngine.runImpl(Engine.scala:545) [14:24:16][Step 3/3] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:204) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.org$scalatest$BeforeAndAfterAllConfigMap$$super$run(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfterAllConfigMap$class.liftedTree1$1(BeforeAndAfterAllConfigMap.scala:248) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfterAllConfigMap$class.run(BeforeAndAfterAllConfigMap.scala:247) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.org$scalatest$BeforeAndAfter$$super$run(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241) [14:24:16][Step 3/3] at mesosphere.marathon.integration.ResidentTaskIntegrationTest.run(ResidentTaskIntegrationTest.scala:16) [14:24:16][Step 3/3] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:444) [14:24:16][Step 3/3] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:651) [14:24:16][Step 3/3] at sbt.ForkMain$Run$2.call(ForkMain.java:294) [14:24:16][Step 3/3] at sbt.ForkMain$Run$2.call(ForkMain.java:284) [14:24:16][Step 3/3] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [14:24:16][Step 3/3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [14:24:16][Step 3/3] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [14:24:16][Step 3/3] at java.lang.Thread.run(Thread.java:745) [14:24:16][Step 3/3] + Given An app that writes into a persistent volume [14:24:16][Step 3/3] + When a task is launched [14:24:16][Step 3/3] + Then it successfully writes to the persistent volume and then finishes ```
build
resident tasks flaky test persistent volume will be re attached and keep state persistent volume will be re attached and keep state failed seconds milliseconds java lang assertionerror waiting for event deployment success to arrive took longer than seconds give up at mesosphere marathon integration setup waittestsupport next waittestsupport scala at mesosphere marathon integration setup waittestsupport waitfor waittestsupport scala at mesosphere marathon integration setup marathoncallbacktestsupport class waitforeventmatching marathoncallbacktestsupport scala at mesosphere marathon integration residenttaskintegrationtest waitforeventmatching residenttaskintegrationtest scala at mesosphere marathon integration setup marathoncallbacktestsupport class waitforeventwith marathoncallbacktestsupport scala at mesosphere marathon integration residenttaskintegrationtest waitforeventwith residenttaskintegrationtest scala at mesosphere marathon integration setup marathoncallbacktestsupport class waitforevent marathoncallbacktestsupport scala at mesosphere marathon integration residenttaskintegrationtest waitforevent residenttaskintegrationtest scala at mesosphere marathon integration residenttaskintegrationtest anonfun apply residenttaskintegrationtest scala at mesosphere marathon integration residenttaskintegrationtest anonfun apply residenttaskintegrationtest scala at mesosphere marathon integration residenttaskintegrationtest anonfun test apply mcv sp residenttaskintegrationtest scala at mesosphere marathon integration residenttaskintegrationtest anonfun test apply residenttaskintegrationtest scala at mesosphere marathon integration residenttaskintegrationtest anonfun test apply residenttaskintegrationtest scala at org scalatest transformer anonfun apply apply mcv sp transformer scala at org scalatest outcomeof class outcomeof outcomeof scala at org scalatest outcomeof outcomeof outcomeof scala at org scalatest transformer apply transformer scala at org scalatest transformer apply transformer scala at org scalatest funsuitelike anon apply funsuitelike scala at org scalatest suite class withfixture suite scala at org scalatest funsuite withfixture funsuite scala at org scalatest funsuitelike class invokewithfixture funsuitelike scala at org scalatest funsuitelike anonfun runtest apply funsuitelike scala at org scalatest funsuitelike anonfun runtest apply funsuitelike scala at org scalatest superengine runtestimpl engine scala at org scalatest funsuitelike class runtest funsuitelike scala at mesosphere marathon integration residenttaskintegrationtest org scalatest beforeandafter super runtest residenttaskintegrationtest scala at org scalatest beforeandafter class runtest beforeandafter scala at mesosphere marathon integration residenttaskintegrationtest runtest residenttaskintegrationtest scala at org scalatest funsuitelike anonfun runtests apply funsuitelike scala at org scalatest funsuitelike anonfun runtests apply funsuitelike scala at org scalatest superengine anonfun traversesubnodes apply engine scala at org scalatest superengine anonfun traversesubnodes apply engine scala at scala collection immutable list foreach list scala at org scalatest superengine traversesubnodes engine scala at org scalatest superengine org scalatest superengine runtestsinbranch engine scala at org scalatest superengine runtestsimpl engine scala at org scalatest funsuitelike class runtests funsuitelike scala at org scalatest funsuite runtests funsuite scala at org scalatest suite class run suite scala at org scalatest 
funsuite org scalatest funsuitelike super run funsuite scala at org scalatest funsuitelike anonfun run apply funsuitelike scala at org scalatest funsuitelike anonfun run apply funsuitelike scala at org scalatest superengine runimpl engine scala at org scalatest funsuitelike class run funsuitelike scala at mesosphere marathon integration residenttaskintegrationtest org scalatest beforeandafterallconfigmap super run residenttaskintegrationtest scala at org scalatest beforeandafterallconfigmap class beforeandafterallconfigmap scala at org scalatest beforeandafterallconfigmap class run beforeandafterallconfigmap scala at mesosphere marathon integration residenttaskintegrationtest org scalatest beforeandafter super run residenttaskintegrationtest scala at org scalatest beforeandafter class run beforeandafter scala at mesosphere marathon integration residenttaskintegrationtest run residenttaskintegrationtest scala at org scalatest tools framework org scalatest tools framework runsuite framework scala at org scalatest tools framework scalatesttask execute framework scala at sbt forkmain run call forkmain java at sbt forkmain run call forkmain java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java given an app that writes into a persistent volume when a task is launched then it successfully writes to the persistent volume and then finishes
6
3,874,113,297
IssuesEvent
2016-04-11 19:20:57
openshift/origin
https://api.github.com/repos/openshift/origin
closed
Update build config name annotation on builds when renaming build config
area/techdebt component/build priority/P2
We should update the build config name annotation when renaming the BC, so we don't lose the reference to the BC the builds were started from. This might affect removing the builds when removing the build config. It might also break serial builds when we can't check what scheduling policy the BC associated with the build has.
1
Update build config name annotation on builds when renaming build config - We should update the build config name annotation when renaming the BC, so we don't lose the reference to the BC the builds were started from. This might affect removing the builds when removing the build config. It might also break serial builds when we can't check what scheduling policy the BC associated with the build has.
build
update build config name annotation on builds when renaming build config we should update the build config name annotation when renaming the bc so we don t lose the reference to the bc the builds were started from this might affect removing the builds when removing the build config it might also break serial builds when we can t check what scheduling policy the bc associated with the build has
7
3,908,821,241
IssuesEvent
2016-04-19 17:07:05
ManageIQ/manageiq
https://api.github.com/repos/ManageIQ/manageiq
closed
Docker: switch the clone path from /manageiq to /var/www/miq/vmdb
build technical debt
cc @bazulay @fbladilo
1
Docker: switch the clone path from /manageiq to /var/www/miq/vmdb - cc @bazulay @fbladilo
build
docker switch the clone path from manageiq to var www miq vmdb cc bazulay fbladilo
8
4,282,752,050
IssuesEvent
2016-07-15 10:27:48
CartoDB/cartodb
https://api.github.com/repos/CartoDB/cartodb
opened
Separate models for different layers
Builder technical-debt
Right now we basically have at least three different kinds of layers (in the builder context, not to be confused with the layer types in the map!), each with their own kind of view and behavior: There are various [kinds/types that define a layer](https://github.com/CartoDB/cartodb/blob/f3d7f4be2af99974f2bfb0c71aa3eefbcc1614b0/lib/assets/javascripts/cartodb3/data/layer-types-and-kinds.js#L1-L17), but they can basically be put in three distinct categories: - layer-on-top (tiled) - data layers (cartodb, torque) - basemaps (tiled, wms etc.) Right now we only have [one model](https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/data/layer-definition-model.js) that represents them all, even if their behavior and data vary vastly. Just like we have done with some other models, I think it would make sense to have separate implementations for these three categories. This way we can extract some business logic related to basemaps out from the views (see #8901 for an example), as well as for basemaps not having to know anything about analysis. cc @xavijam @matallo @alonsogarciapablo
1
Separate models for different layers - Right now we basically have at least three different kinds of layers (in the builder context, not to be confused with the layer types in the map!), each with their own kind of view and behavior: There are various [kinds/types that define a layer](https://github.com/CartoDB/cartodb/blob/f3d7f4be2af99974f2bfb0c71aa3eefbcc1614b0/lib/assets/javascripts/cartodb3/data/layer-types-and-kinds.js#L1-L17), but they can basically be put in three distinct categories: - layer-on-top (tiled) - data layers (cartodb, torque) - basemaps (tiled, wms etc.) Right now we only have [one model](https://github.com/CartoDB/cartodb/blob/master/lib/assets/javascripts/cartodb3/data/layer-definition-model.js) that represents them all, even if their behavior and data vary vastly. Just like we have done with some other models, I think it would make sense to have separate implementations for these three categories. This way we can extract some business logic related to basemaps out from the views (see #8901 for an example), as well as for basemaps not having to know anything about analysis. cc @xavijam @matallo @alonsogarciapablo
build
separate models for different layers right now we basically have at least three different kinds of layers in the builder context not to be confused with the layer types in the map each with their own kind of view and behavior there are various but they can basically be put in three distinct categories layer on top tiled data layers cartodb torque basemaps tiled wms etc right now we only have that represents them all even if their behavior and data vary vastly just like we have done with some other models i think it would make sense to have separate implementations for these three categories this way we can extract some business logic related to basemaps out from the views see for an example as well as for basemaps not having to know anything about analysis cc xavijam matallo alonsogarciapablo
9
4,388,555,625
IssuesEvent
2016-08-08 19:15:45
FineUploader/fine-uploader
https://api.github.com/repos/FineUploader/fine-uploader
closed
Allow more current version of node/npm for build, & clean up build
5 - Done build technical debt
- [x] npm script to clean the build artifacts - [x] ... to run linter(s) - [x] ... to build all.fineuploader.js for manual/automated tests + source maps - [x] ... to run unit tests (FF only for now) - [x] ... to generate zip files for all builds of FU for download - [x] ... to generate build for distribution via npm - [x] ensure current version of npm/node can be used to build - [x] Remove grunt & all grunt-related dependencies. - [x] update travis-ci scripts - [x] check build files again and [increase minification](https://davidwalsh.name/compress-uglify). - [x] update build instructions - [x] update downloads section of fineuploader.com It would be nice to allow more modern versions of node to work for development. The current solution is to use nvm to install 0.10.33 for FU development. I'll take this opportunity to clean up the build entirely and remove all of the grunt-related cruft too.
1
Allow more current version of node/npm for build, & clean up build - - [x] npm script to clean the build artifacts - [x] ... to run linter(s) - [x] ... to build all.fineuploader.js for manual/automated tests + source maps - [x] ... to run unit tests (FF only for now) - [x] ... to generate zip files for all builds of FU for download - [x] ... to generate build for distribution via npm - [x] ensure current version of npm/node can be used to build - [x] Remove grunt & all grunt-related dependencies. - [x] update travis-ci scripts - [x] check build files again and [increase minification](https://davidwalsh.name/compress-uglify). - [x] update build instructions - [x] update downloads section of fineuploader.com It would be nice to allow more modern versions of node to work for development. The current solution is to use nvm to install 0.10.33 for FU development. I'll take this opportunity to clean up the build entirely and remove all of the grunt-related cruft too.
build
allow more current version of node npm for build clean up build npm script to clean the build artifacts to run linter s to build all fineuploader js for manual automated tests source maps to run unit tests ff only for now to generate zip files for all builds of fu for download to generate build for distribution via npm ensure current version of npm node can be used to build remove grunt all grunt related dependencies update travis ci scripts check build files again and update build instructions update downloads section of fineuploader com it would be nice to allow more modern versions of node to work for development the current solution is to use nvm to install for fu development i ll take this opportunity to clean up the build entirely and remove all of the grunt related cruft too
10
4,928,457,071
IssuesEvent
2016-11-27 10:10:39
GoogleCloudPlatform/google-cloud-eclipse
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
closed
Hamcrest jars in release
build high priority Tech Debt
Our release build bundles in 3 hamcrest jars. These should only be needed for tests, not for the production code. Can we get rid of them?
1
Hamcrest jars in release - Our release build bundles in 3 hamcrest jars. These should only be needed for tests, not for the production code. Can we get rid of them?
build
hamcrest jars in release our release build bundles in hamcrest jars these should only be needed for tests not for the production code can we get rid of them
13
5,194,460,423
IssuesEvent
2017-01-23 03:52:09
openshift/origin
https://api.github.com/repos/openshift/origin
closed
Refactor build controllers by controller type to match deployments
area/techdebt component/build kind/enhancement priority/P3
Create a separate package per controller/factory pair.
1
Refactor build controllers by controller type to match deployments - Create a separate package per controller/factory pair.
build
refactor build controllers by controller type to match deployments create a separate package per controller factory pair
14
5,206,063,714
IssuesEvent
2017-01-24 19:36:31
weaveworks/scope
https://api.github.com/repos/weaveworks/scope
closed
Add unit tests for the ECS reporter
component/build ecs techdebt
Leftover from https://github.com/weaveworks/scope/pull/2026
1
Add unit tests for the ECS reporter - Leftover from https://github.com/weaveworks/scope/pull/2026
build
add unit tests for the ecs reporter leftover from
15
5,207,015,869
IssuesEvent
2017-01-24 22:13:54
GoogleCloudPlatform/google-cloud-eclipse
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
closed
Get freemarker from maven central if possible
build Tech Debt
rather than bundling it
1
Get freemarker from maven central if possible - rather than bundling it
build
get freemarker from maven central if possible rather than bundling it
16
5,280,869,280
IssuesEvent
2017-02-07 15:14:45
weaveworks/scope
https://api.github.com/repos/weaveworks/scope
closed
Improve debuggability of integration tests
component/build techdebt
Adding and debugging integration tests is a bit hairy. I tend to enable ssh in the CircleCI builds. It would be good if we had the local VM creation automated through `vagrant up`, like in weave: https://github.com/weaveworks/weave/blob/master/test/README.md
1
Improve debuggability of integration tests - Adding and debugging integration tests is a bit hairy. I tend to enable ssh in the CircleCI builds. It would be good if we had the local VM creation automated through `vagrant up`, like in weave: https://github.com/weaveworks/weave/blob/master/test/README.md
build
improve debuggability of integration tests adding and debugging integration tests is a bit hairy i tend to enable ssh in the circleci builds it would be good if we had the local vm creation automated through vagrant up like in weave
17
5,397,069,445
IssuesEvent
2017-02-27 13:43:44
weaveworks/scope
https://api.github.com/repos/weaveworks/scope
closed
Move common directory to its own repo
component/build techdebt
- Move it to https://github.com/weaveworks/common - Move the packages out of the common directory - Preserve history - Remove stuff that doesn't make sense to be in common - Make sure it builds (locally & with CI) - Update scope Motivation is that: - we are using this code in a variety of other code bases which have little to do with scope - we have *duplicated* some of this code in other code bases, leading to bugs and unexpected behaviour
1
Move common directory to its own repo - - Move it to https://github.com/weaveworks/common - Move the packages out of the common directory - Preserve history - Remove stuff that doesn't make sense to be in common - Make sure it builds (locally & with CI) - Update scope Motivation is that: - we are using this code in a variety of other code bases which have little to do with scope - we have *duplicated* some of this code in other code bases, leading to bugs and unexpected behaviour
build
move common directory to its own repo move it to move the packages out of the common directory preserve history remove stuff that doesn t make sense to be in common make sure it builds locally with ci update scope motivation is that we are using this code in a variety of other code bases which have little to do with scope we have duplicated some of this code in other code bases leading to bugs and unexpected behaviour
18
5,475,708,465
IssuesEvent
2017-03-11 14:02:40
FakeItEasy/FakeItEasy
https://api.github.com/repos/FakeItEasy/FakeItEasy
opened
Drop resource files and move message strings to the code
build tech-debt
As discussed in #1013 Error messages are currently in a resource file (.resx), but are not localized and there's no plan to localize them. Also, the custom tool that generates code from the resource files produces code that is not compatible with .NET Standard 1.6, so it's currently disconnected. This means that adding a string to the resource file no longer updates the corresponding code file. Since we don't need the features provided by resx files and can't use the built-in custom tool, it would make sense to convert the resources to string constants in a class. We could also use the approach suggested by @blairconrad in https://github.com/FakeItEasy/FakeItEasy/pull/1013#issuecomment-285865877
1
Drop resource files and move message strings to the code - As discussed in #1013 Error messages are currently in a resource file (.resx), but are not localized and there's no plan to localize them. Also, the custom tool that generates code from the resource files produces code that is not compatible with .NET Standard 1.6, so it's currently disconnected. This means that adding a string to the resource file no longer updates the corresponding code file. Since we don't need the features provided by resx files and can't use the built-in custom tool, it would make sense to convert the resources to string constants in a class. We could also use the approach suggested by @blairconrad in https://github.com/FakeItEasy/FakeItEasy/pull/1013#issuecomment-285865877
build
drop resource files and move message strings to the code as discussed in error messages are currently in a resource file resx but are not localized and there s no plan to localize them also the custom tool that generates code from the resource files produces code that is not compatible with net standard so it s currently disconnected this means that adding a string to the resource file no longer updates the corresponding code file since we don t need the features provided by resx files and can t use the built in custom tool it would make sense to convert the resources to string constants in a class we could also use the approach suggested by blairconrad in
19
5,868,275,987
IssuesEvent
2017-05-14 11:09:27
FakeItEasy/FakeItEasy
https://api.github.com/repos/FakeItEasy/FakeItEasy
closed
Generate external assembly with extension points for integration tests
build in-progress P2 tech-debt
As noted in https://github.com/FakeItEasy/FakeItEasy/pull/1090#issuecomment-300454071, the current assembly will not be compiled if a user runs the integration tests from the IDE without doing a command-line build, and then the tests would fail. We could generate the assembly as part of the test fixture setup.
1
Generate external assembly with extension points for integration tests - As noted in https://github.com/FakeItEasy/FakeItEasy/pull/1090#issuecomment-300454071, the current assembly will not be compiled if a user runs the integration tests from the IDE without doing a command-line build, and then the tests would fail. We could generate the assembly as part of the test fixture setup.
build
generate external assembly with extension points for integration tests as noted in the current assembly will not be compiled if a user runs the integration tests from the ide without doing a command line build and then the tests would fail we could generate the assembly as part of the test fixture setup
21
6,188,154,399
IssuesEvent
2017-07-04 09:26:51
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
Link checking within docs should be automated
build debt docs qa
Related to #449 Documentation links need to be checked via a tool as part of an automated test. This needs to be part of the test pipeline so that we can gain confidence in the documentation quality with regard to internal linking. ## Context Issues are being raised relating to broken links; this can be avoided by incorporating a link checker within the automated tests ## Expected Behavior If there is a broken link in the documentation, a test will fail ## Actual Behavior No tests exist to fail, so documents are released with unintentional broken links ## Possible Fix Incorporate a link checker into the testing pipeline
1
Link checking within docs should be automated - Related to #449 Documentation links need to be checked via a tool as part of an automated test. This needs to be part of the test pipeline so that we can gain confidence in the documentation quality with regard to internal linking. ## Context Issues are being raised relating to broken links; this can be avoided by incorporating a link checker within the automated tests ## Expected Behavior If there is a broken link in the documentation, a test will fail ## Actual Behavior No tests exist to fail, so documents are released with unintentional broken links ## Possible Fix Incorporate a link checker into the testing pipeline
build
link checking within docs should be automated related to documentation links need to be checked via a tool as part of an automated test this needs to be part of the test pipeline so that we can gain confidence in the documentation quality with regard to internal linking context issues are being raised relating to broken links this can be avoided by incorporating a link checker within the automated tests expected behavior if there is a broken link in the documentation a test will fail actual behavior no tests exist to fail so documents are released with unintentional broken links possible fix incorporate a link checker into the testing pipeline
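One possible shape for the automated check this record asks for, sketched in Go; the docs path and link pattern are assumptions, and an off-the-shelf markdown link checker would serve equally well:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// linkRE matches the target of markdown links such as [text](target),
// ignoring any #fragment suffix.
var linkRE = regexp.MustCompile(`\]\(([^)#]+)`)

func main() {
	broken := 0
	err := filepath.Walk("docs", func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, ".md") {
			return err
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, m := range linkRE.FindAllStringSubmatch(string(data), -1) {
			target := m[1]
			if strings.HasPrefix(target, "http") {
				continue // external links would need an HTTP check instead
			}
			if _, err := os.Stat(filepath.Join(filepath.Dir(path), target)); err != nil {
				fmt.Printf("%s: broken link %s\n", path, target)
				broken++
			}
		}
		return nil
	})
	if err != nil || broken > 0 {
		os.Exit(1) // non-zero exit fails the pipeline
	}
}
```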
24
6,342,832,192
IssuesEvent
2017-07-27 16:15:38
yahoo/fili
https://api.github.com/repos/yahoo/fili
closed
Maven warnings on build
BUILD TECH-DEBT
Parent POM warnings from `mvn install` Version: Apache Maven 3.5.0 ``` [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-system-config:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-core:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-navi:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-wikipedia-example:war:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-generic-example:war:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-parent-pom:pom:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ line 768, column 32 [WARNING] [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build. [WARNING] [WARNING] For this reason, future Maven versions might no longer support building such malformed projects. ``` it looks like we have some potential build issues with the current config.
1
Maven warnings on build - Parent POM warnings from `mvn install` Version: Apache Maven 3.5.0 ``` [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-system-config:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-core:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-navi:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili:jar:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-wikipedia-example:war:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-generic-example:war:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. [WARNING] [WARNING] Some problems were encountered while building the effective model for com.yahoo.fili:fili-parent-pom:pom:0.9-SNAPSHOT [WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ line 768, column 32 [WARNING] [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build. [WARNING] [WARNING] For this reason, future Maven versions might no longer support building such malformed projects. ``` it looks like we have some potential build issues with the current config.
build
maven warnings on build parent pom warnings from mvn install version apache maven some problems were encountered while building the effective model for com yahoo fili fili system config jar snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter some problems were encountered while building the effective model for com yahoo fili fili core jar snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter some problems were encountered while building the effective model for com yahoo fili fili navi jar snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter some problems were encountered while building the effective model for com yahoo fili fili jar snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter some problems were encountered while building the effective model for com yahoo fili fili wikipedia example war snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter some problems were encountered while building the effective model for com yahoo fili fili generic example war snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter some problems were encountered while building the effective model for com yahoo fili fili parent pom pom snapshot reporting configuration should be done in section not in maven site plugin as reportplugins parameter line column it is highly recommended to fix these problems because they threaten the stability of your build for this reason future maven versions might no longer support building such malformed projects it looks like we have some potential build issues with the current config
25
6,497,910,317
IssuesEvent
2017-08-22 15:28:58
openshift/origin
https://api.github.com/repos/openshift/origin
closed
refactor build admission plugins
area/techdebt component/build priority/P3
they're not run at admission any more, and they run in the context of the build controller - so the code should move location, and they should take advantage of having access to the build object rather than doing lots of serialisation/deserialisation
1
refactor build admission plugins - they're not run at admission any more, and they run in the context of the build controller - so the code should move location, and they should take advantage of having access to the build object rather than doing lots of serialisation/deserialisation
build
refactor build admission plugins they re not run at admission any more and they run in the context of the build controller so the code should move location and they should take advantage of having access to the build object rather than doing lots of serialisation deserialisation
26
6,498,503,681
IssuesEvent
2017-08-22 17:41:14
vterm/vterm
https://api.github.com/repos/vterm/vterm
opened
Build for deb
debt errors/build platform/linux
Because of #58, we had to remove the "deb" target for Linux builds. This needs to be solved.
1
Build for deb - Because of #58, we had to remove the "deb" target for Linux builds. This needs to be solved.
build
build for deb because of we had to remove the deb target for linux builds this needs to be solved
27
6,515,403,042
IssuesEvent
2017-08-26 15:18:17
openshift/ansible-service-broker
https://api.github.com/repos/openshift/ansible-service-broker
closed
Fix make prep_local to handle the auth directory and files.
bug build tech-debt
`make run` won't work quite right with auth enabled.
1
Fix make prep_local to handle the auth directory and files. - `make run` won't work quite right with auth enabled.
build
fix make prep local to handle the auth directory and files make run won t work quite right with auth enabled
28
6,552,986,166
IssuesEvent
2017-09-05 20:33:42
openshift/origin
https://api.github.com/repos/openshift/origin
closed
build controller: use retry in buildconfig policy to determine what build to run next
area/techdebt component/build priority/P2
With https://github.com/openshift/origin/pull/16055 we are reverting to using the REST client to determine which build to run next when transitioning a build to a completed phase. A better solution would be to keep using the cache and simply retry when determining the build to run next and a build is still running.
1
build controller: use retry in buildconfig policy to determine what build to run next - With https://github.com/openshift/origin/pull/16055 we are reverting to using the REST client to determine which build to run next when transitioning a build to a completed phase. A better solution would be to keep using the cache and simply retry when determining the build to run next and a build is still running.
build
build controller use retry in buildconfig policy to determine what build to run next with we are reverting to using the rest client to determine which build to run next when transitioning a build to a completed phase a better solution would be to keep using the cache and simply retry when determining the build to run next and a build is still running
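A rough Go sketch of the cache-plus-retry idea from the record above; the helper and error names are invented for illustration, since the real controller would lean on its informer cache and workqueue requeues rather than a sleep loop:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errStillRunning stands in for "the cache still shows a running build".
var errStillRunning = errors.New("a build from this config is still running")

// nextBuildWithRetry keeps consulting the (possibly stale) cache with a
// short backoff instead of falling back to a live REST lookup.
func nextBuildWithRetry(fromCache func() (string, error)) (string, error) {
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		name, err := fromCache()
		if err == nil {
			return name, nil
		}
		if !errors.Is(err, errStillRunning) {
			return "", err // a real failure: do not retry
		}
		lastErr = err
		time.Sleep(time.Duration(attempt) * 100 * time.Millisecond)
	}
	return "", lastErr
}

func main() {
	calls := 0
	lookup := func() (string, error) { // simulated cache: stale twice, then settled
		calls++
		if calls < 3 {
			return "", errStillRunning
		}
		return "build-7", nil
	}
	fmt.Println(nextBuildWithRetry(lookup))
}
```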
29
6,843,161,771
IssuesEvent
2017-11-12 12:22:20
adamralph/liteguard
https://api.github.com/repos/adamralph/liteguard
closed
Fix up test project
build in-progress tech-debt
- Reference Test SDK - Reference xunit VS runner - Replace xunit.runner.console in build packages with .NET CLI tool reference in project - Simplify test build target
1
Fix up test project - - Reference Test SDK - Reference xunit VS runner - Replace xunit.runner.console in build packages with .NET CLI tool reference in project - Simplify test build target
build
fix up test project reference test sdk reference xunit vs runner replace xunit runner console in build packages with net cli tool reference in project simplify test build target
30
6,888,368,264
IssuesEvent
2017-11-22 05:26:57
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
closed
Adopt absl common libraries
build enhancement tech debt
Google's common C++ libraries are now OSS: https://github.com/abseil/abseil-cpp. This is a lightweight alternative to something like Boost which can replace existing ad hoc utils such as string join/split with standard implementations. This issue will track a proposal to: 1. Add absl as an external dependency to Envoy (it already has Bazel support). 2. Replace existing implementation of duplicate functionality with absl alternatives. Places to look at include `common/common/utility.h`, `Optional`, mutexes, `make_unique`, (others?) 3. Use absl utilities going forward.
1
Adopt absl common libraries - Google's common C++ libraries are now OSS: https://github.com/abseil/abseil-cpp. This is a lightweight alternative to something like Boost which can replace existing ad hoc utils such as string join/split with standard implementations. This issue will track a proposal to: 1. Add absl as an external dependency to Envoy (it already has Bazel support). 2. Replace existing implementation of duplicate functionality with absl alternatives. Places to look at include `common/common/utility.h`, `Optional`, mutexes, `make_unique`, (others?) 3. Use absl utilities going forward.
build
adopt absl common libraries google s common c libraries are now oss this is a lightweight alternative to something like boost which can replace existing ad hoc utils such as string join split with standard implementations this issue will track a proposal to add absl as an external dependency to envoy it already has bazel support replace existing implementation of duplicate functionality with absl alternatives places to look at include common common utility h optional mutexes make unique others use absl utilities going forward
31
6,949,883,235
IssuesEvent
2017-12-06 08:44:54
syndesisio/syndesis
https://api.github.com/repos/syndesisio/syndesis
closed
Update mvnw to 3.5.2
cat/bug cat/build cat/techdebt size/s
as 3.5.0 has a timestamp issue when deploying snapshots.
1
Update mvnw to 3.5.2 - as 3.5.0 has a timestamp issue when deploying snapshots.
build
update mvnw to as has a timestamp issue when deploying snapshots
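For reference, the Takari wrapper can be regenerated against a specific Maven version with a one-liner; pinning 3.5.2 here just mirrors the issue above:

```bash
# Regenerate the wrapper against Maven 3.5.2, then sanity-check it.
mvn -N io.takari:maven:wrapper -Dmaven=3.5.2
./mvnw --version   # should now report Apache Maven 3.5.2
```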
32
7,059,433,467
IssuesEvent
2018-01-05 01:35:23
openshift/origin
https://api.github.com/repos/openshift/origin
closed
Factor out duplicate code in build pod creation strategies
area/techdebt component/build kind/enhancement priority/P2
Factor out overlap in the 3 build creation strategies (custom, s2i, docker)
1
Factor out duplicate code in build pod creation strategies - Factor out overlap in the 3 build creation strategies (custom, s2i, docker)
build
factor out duplicate code in build pod creation strategies factor out overlap in the build creation strategies custom docker
34
7,113,541,324
IssuesEvent
2018-01-17 20:51:25
openshift/origin
https://api.github.com/repos/openshift/origin
closed
`oc start-build` times out with chained binary builds
area/techdebt component/build priority/P2
For #9800 I attempted to make a set of chained binary builds. These would start with a base Fedora image, and then successively make changes in configuration. I was unable to use `oc start-build` for anything other than the first step, and thus had to use `docker build` instead. ##### Version ``` openshift v1.3.0-alpha.2+0b34629 kubernetes v1.3.0+57fb9ac etcd 2.3.0+git oc v1.3.0-alpha.2+1173ef4-dirty kubernetes v1.3.0+57fb9ac features: Basic-Auth GSSAPI Kerberos SPNEGO ``` ##### Steps To Reproduce Create the following binary builds using build configs and Dockerfiles that reference them: 1. `Fedora:23` docker to image stream `Tag1` 2. Image stream `Tag1` to image stream `Tag2` Use `oc start-build` to create both. ##### Current Result `oc start-build` works for the first build and times out on the second. For #9800 the `openshift.log` showed ``` E0713 11:56:08.733511 8876 pod_workers.go:183] Error syncing pod 39165bf1-4907-11e6-8494-507b9dac97ff, skipping: timeout expired waiting for volumes to attach/mount for pod "fedora-gssapi-kerberos-1-build"/"gssapiproxy". list of unattached/unmounted volumes=[builder-dockercfg-a97gq-push] ``` ##### Expected Result Both builds are successful. cc @liggitt @stevekuznetsov
1
`oc start-build` times out with chained binary builds - For #9800 I attempted to make a set of chained binary builds. These would start with a base Fedora image, and then successively make changes in configuration. I was unable to use `oc start-build` for anything other than the first step, and thus had to use `docker build` instead. ##### Version ``` openshift v1.3.0-alpha.2+0b34629 kubernetes v1.3.0+57fb9ac etcd 2.3.0+git oc v1.3.0-alpha.2+1173ef4-dirty kubernetes v1.3.0+57fb9ac features: Basic-Auth GSSAPI Kerberos SPNEGO ``` ##### Steps To Reproduce Create the following binary builds using build configs and Dockerfiles that reference them: 1. `Fedora:23` docker to image stream `Tag1` 2. Image stream `Tag1` to image stream `Tag2` Use `oc start-build` to create both. ##### Current Result `oc start-build` works for the first build and times out on the second. For #9800 the `openshift.log` showed ``` E0713 11:56:08.733511 8876 pod_workers.go:183] Error syncing pod 39165bf1-4907-11e6-8494-507b9dac97ff, skipping: timeout expired waiting for volumes to attach/mount for pod "fedora-gssapi-kerberos-1-build"/"gssapiproxy". list of unattached/unmounted volumes=[builder-dockercfg-a97gq-push] ``` ##### Expected Result Both builds are successful. cc @liggitt @stevekuznetsov
build
oc start build times out with chained binary builds for i attempted to make a set of chained binary builds these would start with a base fedora image and then successively make changes in configuration i was unable to use oc start build for anything other than the first step and thus had to use docker build instead version openshift alpha kubernetes etcd git oc alpha dirty kubernetes features basic auth gssapi kerberos spnego steps to reproduce create the following binary builds using build configs and dockerfiles that reference them fedora docker to image stream image stream to image stream use oc start build to create both current result oc start build works for the first build and times out on the second for the openshift log showed pod workers go error syncing pod skipping timeout expired waiting for volumes to attach mount for pod fedora gssapi kerberos build gssapiproxy list of unattached unmounted volumes expected result both builds are successful cc liggitt stevekuznetsov
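A rough sketch of the reproduction in `oc` terms; the build names, context directories, and Fedora tag are placeholders, not taken from the original report:

```bash
# Stage 1: binary docker build on top of the base Fedora image.
oc new-build --name=stage1 --binary --strategy=docker --docker-image=fedora:23
oc start-build stage1 --from-dir=./stage1-context --follow

# Stage 2: a binary build whose Dockerfile starts FROM the stage1 output;
# this second start-build is the step reported to time out.
oc new-build --name=stage2 --binary --strategy=docker --image-stream=stage1:latest
oc start-build stage2 --from-dir=./stage2-context --follow
```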
35
7,113,649,681
IssuesEvent
2018-01-17 21:15:17
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
closed
Followup items from Dockerhub registry migration
build help wanted tech debt
Following the Dockerhub migration from `lyft/envoy` to `envoyproxy/envoy`, the following items are in response to questions that have popped up on Slack. * Add a notice to [the old registry](https://hub.docker.com/r/lyft/envoy) that tells users about [the new registry](https://hub.docker.com/r/envoyproxy/envoy/) * Investigate if [the old registry](https://hub.docker.com/r/lyft/envoy) can be redirected to [the new registry](https://hub.docker.com/r/envoyproxy/envoy)
1
Followup items from Dockerhub registry migration - Following the Dockerhub migration from `lyft/envoy` to `envoyproxy/envoy`, the following items are in response to questions that have popped up on Slack. * Add a notice to [the old registry](https://hub.docker.com/r/lyft/envoy) that tells users about [the new registry](https://hub.docker.com/r/envoyproxy/envoy/) * Investigate if [the old registry](https://hub.docker.com/r/lyft/envoy) can be redirected to [the new registry](https://hub.docker.com/r/envoyproxy/envoy)
build
followup items from dockerhub registry migration following the dockerhub migration from lyft envoy to envoyproxy envoy the following items are in response to questions that have popped up on slack add a notice to that tells users about investigate if can be redirected to
36
7,370,014,868
IssuesEvent
2018-03-13 06:27:14
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
Build test bananas
build debt help wanted qa
There are quite a few .bna files checked in for test purposes ## Context Binary files in git are not ideal for dealing with updates/merge conflicts ## Expected Behavior .bna files should be created in a pretest script ## Actual Behavior .bna files are created manually and checked in
1
Build test bananas - There are quite a few .bna files checked in for test purposes ## Context Binary files in git are not ideal for dealing with updates/merge conflicts ## Expected Behavior .bna files should be created in a pretest script ## Actual Behavior .bna files are created manually and checked in
build
build test bananas there are quite a few bna files checked in for test purposes context binary files in git are not ideal for dealing with updates merge conflicts expected behavior bna files should be created in a pretest script actual behavior bna files are created manually and checked in
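A pretest script along these lines would keep the binaries out of git; the network names and paths are illustrative, not the repo's actual layout:

```bash
#!/usr/bin/env bash
# Hypothetical pretest step: build each test .bna from its source folder
# with the Composer CLI instead of committing the archives.
set -e
for network in test-network-a test-network-b; do
  composer archive create -t dir \
    -n "./networks/${network}" \
    -a "./test/data/${network}.bna"
done
```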
37
7,371,442,082
IssuesEvent
2018-03-13 11:46:50
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
opened
Failures in before step for FV tests
build debt qa
The following is causing test fails: ``` Historian "before all" hook: Error: Error trying invoke business network. Error: Peer localhost:7051 has rejected transaction 'dc3adabf8b6ab0ba82aff66dfbbfc72cd5a861959a33b71c1244b99353731c60' with code ENDORSEMENT_POLICY_FAILURE at _initializeChannel.then.then.then.then.catch (/home/travis/build/hyperledger/composer/packages/composer-connector-hlfv1/lib/hlfconnection.js:1004:34) at <anonymous> at process._tickCallback (internal/process/next_tick.js:188:7) ``` We could/should be wrapping sensitive functions like the above in a sensible way.
1
Failures in before step for FV tests - The following is causing test fails: ``` Historian "before all" hook: Error: Error trying invoke business network. Error: Peer localhost:7051 has rejected transaction 'dc3adabf8b6ab0ba82aff66dfbbfc72cd5a861959a33b71c1244b99353731c60' with code ENDORSEMENT_POLICY_FAILURE at _initializeChannel.then.then.then.then.catch (/home/travis/build/hyperledger/composer/packages/composer-connector-hlfv1/lib/hlfconnection.js:1004:34) at <anonymous> at process._tickCallback (internal/process/next_tick.js:188:7) ``` We could/should be wrapping sensitive functions like the above in a sensible way.
build
failures in before step for fv tests the following is causing test fails historian before all hook error error trying invoke business network error peer localhost has rejected transaction with code endorsement policy failure at initializechannel then then then then catch home travis build hyperledger composer packages composer connector lib hlfconnection js at at process tickcallback internal process next tick js we could should be wrapping sensitive functions like the above in a sensible way
38
7,375,249,476
IssuesEvent
2018-03-13 23:26:12
eclipse/eclipse.jdt.ls
https://api.github.com/repos/eclipse/eclipse.jdt.ls
opened
Update platform to Photon M6
build/infra debt
Updating the TP to platform M6 is not trivial since it causes some compilation problems (screenshot attached: https://user-images.githubusercontent.com/148698/37375046-51c1aa2e-26f4-11e8-90f9-8757f3412526.png)
1
Update platform to Photon M6 - Updating the TP to platform M6 is not trivial since it causes some compilation problems (screenshot attached: https://user-images.githubusercontent.com/148698/37375046-51c1aa2e-26f4-11e8-90f9-8757f3412526.png)
build
update platform to photon updating the tp to platform is not trivial since it causes some compilation problems screenshot attached
39
7,381,896,213
IssuesEvent
2018-03-15 01:27:18
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
opened
Replace coverity
build help wanted tech debt
It looks like coverity scan might be gone. We will need to replace it with some other static analysis tool. Need to investigate which one.
1
Replace coverity - It looks like coverity scan might be gone. We will need to replace it with some other static analysis tool. Need to investigate which one.
build
replace coverity it looks like coverity scan might be gone we will need to replace it with some other static analysis tool need to investigate which one
40
7,574,754,714
IssuesEvent
2018-04-23 22:06:34
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
closed
Replace git_sha_rewriter with Bazel native approach
build help wanted tech debt
Bazel lost its forced md5 stamping a while back, so we should be able to use a `genrule` and `-Wl,@$(location :label_for_the_file)` in linkopts to replace the https://github.com/envoyproxy/envoy/blob/master/tools/git_sha_rewriter.py hack.
1
Replace git_sha_rewriter with Bazel native approach - Bazel lost its forced md5 stamping a while back, so we should be able to use a `genrule` and `-Wl,@$(location :label_for_the_file)` in linkopts to replace the https://github.com/envoyproxy/envoy/blob/master/tools/git_sha_rewriter.py hack.
build
replace git sha rewriter with bazel native approach bazel lost its forced stamping a while back so we should be able to use a genrule and wl location label for the file in linkopts to replace the hack
41
7,610,838,703
IssuesEvent
2018-05-01 10:41:29
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
Improve ability to test build scripts outside travis
P2 build debt stale
## Context Still have not had a successful release build since splitting to latest and next versions ## Expected Behavior Builds should work reliably ## Actual Behavior Not so much ## Possible Fix Structure build scripts so that they are more amenable to testing outside travis against staging registries/environments
1
Improve ability to test build scripts outside travis - ## Context Still have not had a successful release build since splitting to latest and next versions ## Expected Behavior Builds should work reliably ## Actual Behavior Not so much ## Possible Fix Structure build scripts so that they are more amenable to testing outside travis against staging registries/environments
build
improve ability to test build scripts outside travis context still have not had a successful release build since splitting to latest and next versions expected behavior builds should work reliably actual behavior not so much possible fix structure build scripts so that they are more amenable to testing outside travis against staging registries environments
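One way to make the scripts more amenable to local testing is to push every external endpoint behind an environment variable so a staging registry can be substituted; the variable names and URLs below are assumptions, not the repo's actual conventions:

```bash
# Default to staging endpoints unless the CI environment overrides them.
export NPM_REGISTRY="${NPM_REGISTRY:-https://staging-registry.example.com}"
export DOCKER_REGISTRY="${DOCKER_REGISTRY:-staging.example.com}"

# Exercise the publish path without touching the real registry.
npm publish --registry "${NPM_REGISTRY}" --dry-run
```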
42
7,655,275,320
IssuesEvent
2018-05-10 12:40:25
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
opened
Enhance build release script
build debt
The build release has experienced too many issues with failures. The resulting problem is that we cannot 'pick up' where the previous build failed, which is necessary if the release build fails. The main failure reason is timeouts, and we cannot assume that they will just go away. We need to modify the scripts to enable a pick-up of the failed build, then continue with the process and the key publishing stages. - [ ] Modify the npm publish to check for existence of an item, then conditionally publish - [ ] Check for existence of each docker image, then conditionally build/publish - [ ] Check the version of the hosted playground and then conditionally update
1
Enhance build release script - The build release has experienced too many issues with failures. The resulting problem is that we cannot 'pick up' where the previous build failed, which is necessary if the release build fails. The main failure reason is timeouts, and we cannot assume that they will just go away. We need to modify the scripts to enable a pick-up of the failed build, then continue with the process and the key publishing stages. - [ ] Modify the npm publish to check for existence of an item, then conditionally publish - [ ] Check for existence of each docker image, then conditionally build/publish - [ ] Check the version of the hosted playground and then conditionally update
build
enhance build release script the build release has experienced too many issues with failures the resulting problem is that we cannot pick up where the previous build failed which is necessary if the release build fails the main failure reason is timeouts and we cannot assume that they will just go away we need to modify the scripts to enable a pick up of the failed build then continue with the process and the key publishing stages modify the npm publish to check for existence of an item then conditionally publish check for existence of each docker image then conditionally build publish check the version of the hosted playground and then conditionally update
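A hedged sketch of the first two checklist items above; the package name, version, and image repository are placeholders:

```bash
PKG="composer-playground"
VERSION="0.19.5"   # illustrative

# Item 1: only publish to npm if this exact version is not there yet.
if [ -z "$(npm view "${PKG}@${VERSION}" version 2>/dev/null)" ]; then
  npm publish
fi

# Item 2: only build/push the docker image if the tag cannot be pulled.
if ! docker pull "hyperledger/${PKG}:${VERSION}" >/dev/null 2>&1; then
  docker build -t "hyperledger/${PKG}:${VERSION}" .
  docker push "hyperledger/${PKG}:${VERSION}"
fi
```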
43
7,675,144,752
IssuesEvent
2018-05-15 07:42:31
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
PeerDeps will not automatically update
P2 bug build debt
the `npm run pkgbump` command that we rely on in the release build will only update the peerDependencies if they are also a devDependency, which seems to be a lerna behaviour. This is also true if a manual version bump is made. The proposal here is to add these back into the devDeps to ensure that they are updated automatically.
1
PeerDeps will not automatically update - the `npm run pkgbump` command that we rely on in the release build will only update the peerDependencies if they are also a devDependency, which seems to be a lerna behaviour. This is also true if a manual version bump is made. The proposal here is to add these back into the devDeps to ensure that they are updated automatically.
build
peerdeps will not automatically update the npm run pkgbump command that we rely on in the release build will only update the peerdependencies if they are also a devdependency which seems to be a lerna behaviour this is also true if a manual version bump is made the proposal here is to add these back into the devdeps to ensure that they are updated automatically
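If mirroring the peer deps into devDeps by hand proves error-prone, a small script could keep them in sync; this is a sketch with `jq`, not the project's actual tooling:

```bash
# Copy every peerDependency into devDependencies so lerna's version
# bump (npm run pkgbump) updates both ranges together.
jq '.devDependencies += .peerDependencies' package.json > package.json.tmp \
  && mv package.json.tmp package.json
```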
44
7,770,455,054
IssuesEvent
2018-06-04 08:52:48
hyperledger/composer
https://api.github.com/repos/hyperledger/composer
closed
Enhance build release script
build debt
The build release has experienced too many issues with failures. The resulting problem is that we cannot 'pick up' where the previous build failed, which is necessary if the release build fails. The main failure reason is timeouts, and we cannot assume that they will just go away. We need to modify the scripts to enable a pick-up of the failed build, then continue with the process and the key publishing stages. - [ ] Modify the npm publish to check for existence of an item, then conditionally publish - [ ] Check for existence of each docker image, then conditionally build/publish - [ ] Check the version of the hosted playground and then conditionally update For item 1: - we can build a list of packages that are to be ignored on the publish phase, augment the standard ignore packages and then run the usual `lerna --exec command ` with the new list of packages to ignore (which may or may not be everything) For item 2: - we can modify the list of `DOCKER_IMAGES` to be built, by stripping out existing ones if they exist. Then run the existing publish loop over the list that may or may not be empty. For item 3: - use `cf app composer-playground` to retrieve the version from the hosted playground. Once that is available, we can conditionally publish if the version is incorrect. The final item is the version bump, which in theory is the last thing to be performed. This one is not so much of a concern as it can be run manually quite easily, and in theory because it is the last item in the chain, a series of respins will end up touching the version bump.
1
Enhance build release script - The build release has experienced too many issues with failures. The resulting problem is that we cannot 'pick up' where the previous build failed, which is necessary if the release build fails. The main failure reason is timeouts, and we cannot assume that they will just go away. We need to modify the scripts to enable a pick-up of the failed build, then continue with the process and the key publishing stages. - [ ] Modify the npm publish to check for existence of an item, then conditionally publish - [ ] Check for existence of each docker image, then conditionally build/publish - [ ] Check the version of the hosted playground and then conditionally update For item 1: - we can build a list of packages that are to be ignored on the publish phase, augment the standard ignore packages and then run the usual `lerna --exec command ` with the new list of packages to ignore (which may or may not be everything) For item 2: - we can modify the list of `DOCKER_IMAGES` to be built, by stripping out existing ones if they exist. Then run the existing publish loop over the list that may or may not be empty. For item 3: - use `cf app composer-playground` to retrieve the version from the hosted playground. Once that is available, we can conditionally publish if the version is incorrect. The final item is the version bump, which in theory is the last thing to be performed. This one is not so much of a concern as it can be run manually quite easily, and in theory because it is the last item in the chain, a series of respins will end up touching the version bump.
build
enhance build release script the build release has experienced too many issues with failures the resulting problem is that we cannot pick up where the previous build failed which is necessary if the release build fails the main failure reason is timeouts and we cannot assume that they will just go away we need to modify the scripts to enable a pick up of the failed build then continue with the process and the key publishing stages modify the npm publish to check for existence of an item then conditionally publish check for existence of each docker image then conditionally build publish check the version of the hosted playground and then conditionally update for item we can build a list of packages that are to be ignored on the publish phase augment the standard ignore packages and then run the usual lerna exec command with the new list of packages to ignore which may or may not be everything for item we can modify the list of docker images to be built by stripping out existing ones if they exist then run the existing publish loop over the list that may or may not be empty for item use cf app composer playground to retrieve the version from the hosted playground once that is available we can conditionally publish if the version is incorrect the final item is the version bump which in theory is the last thing to be performed this one is not so much of a concern as it can be run manually quite easily and in theory because it is the last item in the chain a series of respins will end up touching the version bump
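Item 3 could look roughly like this; where the version string actually appears in the `cf app` output is an assumption that would need checking against the real deployment:

```bash
EXPECTED="0.19.5"   # illustrative target version
# Only redeploy the hosted playground if the expected version string
# is absent from the app's metadata (assumption: it is visible there).
if ! cf app composer-playground | grep -q "${EXPECTED}"; then
  cf push composer-playground
fi
```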
48
7,936,698,690
IssuesEvent
2018-07-09 10:15:40
SpineEventEngine/base
https://api.github.com/repos/SpineEventEngine/base
closed
Ship `base` With Bundled Validating Builders
/Build /Compiler tech. debt
Currently, `base` does not build own `VBuilder`s, since it does not use model compiler in the build process. The problem with this is the split package issue when migrating to Java 9. It is suggested to build and bundle `VBuilder`s for `base` externally, i.e. in the publication scripts.
1
Ship `base` With Bundled Validating Builders - Currently, `base` does not build own `VBuilder`s, since it does not use model compiler in the build process. The problem with this is the split package issue when migrating to Java 9. It is suggested to build and bundle `VBuilder`s for `base` externally, i.e. in the publication scripts.
build
ship base with bundled validating builders currently base does not build own vbuilder s since it does not use model compiler in the build process the problem with this is the split package issue when migrating to java it is suggested to build and bundle vbuilder s for base externally i e in the publication scripts
49
8,051,697,859
IssuesEvent
2018-08-01 16:54:35
yahoo/fili
https://api.github.com/repos/yahoo/fili
opened
FindBugs plugin never runs
BUILD TECH-DEBT
When you manually run the findbugs check it fails: ```bash $ mvn findbugs:check ``` We should fix/exclude the bugs it finds and include it as part of the build process
1
FindBugs plugin never runs - When you manually run the findbugs check it fails: ```bash $ mvn findbugs:check ``` We should fix/exclude the bugs it finds and include it as part of the build process
build
findbugs plugin never runs when you manually run the findbugs check it fails bash mvn findbugs check we should fix exclude the bugs it finds and include it as part of the build process
51
8,160,434,966
IssuesEvent
2018-08-24 01:25:51
GoogleCloudPlatform/google-cloud-eclipse
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
closed
Remove Neon support
Tech Debt build high priority
Might want to remove the docker editor in the same or an earlier PR.
1
Remove Neon support - Might want to remove the docker editor in the same or an earlier PR.
build
remove neon support might want to remove the docker editor in the same or an earlier pr
52
8,286,445,499
IssuesEvent
2018-09-19 04:52:39
goharbor/harbor
https://api.github.com/repos/goharbor/harbor
closed
UI builder container should not influence the host status
area/build area/ui kind/debt target/1.7.0
The host's file state was being contaminated when building the UI. The whole process should be done in the container.
1
UI builder container should not influence the host status - The host's file state was being contaminated when building the UI. The whole process should be done in the container.
build
ui builder container should not influence the host status the host s file state was being contaminated when building the ui the whole process should be done in the container
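One containment pattern, sketched with placeholder paths and image names: build inside a throwaway container and copy only the output back, so nothing on the host checkout is touched:

```bash
# Build the UI entirely inside an image, then extract just the artifacts.
docker build -t harbor-ui-builder -f Dockerfile.ui .
docker create --name ui-build harbor-ui-builder
docker cp ui-build:/build/dist ./dist   # path inside the image is an assumption
docker rm ui-build
```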
53
8,302,113,396
IssuesEvent
2018-09-21 13:39:04
debrief/debrief
https://api.github.com/repos/debrief/debrief
closed
Project file missing from distribution
Technical Debt bug sys_build
The sample data folder (in `org.mwc.cmap.combined.features\root_installs`) includes a `.project` file that allows the folder to be imported into an installed Debrief, reducing one step for the analyst. But, the `.project` file isn't distributed with Debrief. Investigate what is preventing its inclusion, and correct the problem.
1
Project file missing from distribution - The sample data folder (in `org.mwc.cmap.combined.features\root_installs`) includes a `.project` file that allows the folder to be imported into an installed Debrief, reducing one step for the analyst. But, the `.project` file isn't distributed with Debrief. Investigate what is preventing its inclusion, and correct the problem.
build
project file missing from distribution the sample data folder in org mwc cmap combined features root installs includes a project file that allows the folder to be imported into an installed debrief reducing one step for the analyst but the project file isn t distributed with debrief investigate what is preventing its inclusion and correct the problem
54
8,336,803,475
IssuesEvent
2018-09-28 09:02:06
SpineEventEngine/base
https://api.github.com/repos/SpineEventEngine/base
closed
Return unit tests of `validate` package back under the `base` module
/Build tech. debt
See `smoke-tests/validation-rules/src/test/.../validate`. These tests should be under the main code tests. Otherwise the API of the validators becomes fragile; there is no way of knowing it's used even in our tests.
1
Return unit tests of `validate` package back under the `base` module - See `smoke-tests/validation-rules/src/test/.../validate`. These tests should be under the main code tests. Otherwise the API of the validators becomes fragile; there is no way of knowing it's used even in our tests.
build
return unit tests of validate package back under the base module see smoke tests validation rules src test validate these tests should be under the main code tests otherwise the api of the validators becomes fragile there is no way of knowing it s used even in our tests
55
8,596,578,091
IssuesEvent
2018-11-15 16:18:59
angelozerr/lsp4xml
https://api.github.com/repos/angelozerr/lsp4xml
opened
Enable sonarcloud.io on the repository
build debt
The idea is to enable SonarQube static analysis on the project, so we can detect common mistakes or potential bugs in advance. See https://sonarcloud.io/about. Requires the repository owner to grant access to this repo. So only @angelozerr can do it. I'll look into enabling code coverage once we get sonarcloud.io up and running.
1
Enable sonarcloud.io on the repository - The idea is to enable SonarQube static analysis on the project, so we can detect common mistakes or potential bugs in advance. See https://sonarcloud.io/about. Requires the repository owner to grant access to this repo. So only @angelozerr can do it. I'll look into enabling code coverage once we get sonarcloud.io up and running.
build
enable sonarcloud io on the repository the idea is to enable sonarqube static analysis on the project so we can detect common mistakes or potential bugs in advance see requires the repository owner to grant access to this repo so only angelozerr can do it i ll look into enabling code coverage once we get sonarcloud io up and running
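For a Maven project like this one, the analysis step itself is typically a single goal; the organization key and token variable below are placeholders:

```bash
# Run a sonarcloud.io analysis as part of the build.
mvn clean verify sonar:sonar \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.organization=my-org \
  -Dsonar.login="${SONAR_TOKEN}"
```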
56
8,629,700,796
IssuesEvent
2018-11-21 21:47:37
ca-cwds/design-system
https://api.github.com/repos/ca-cwds/design-system
opened
really really need CI machine
build help wanted tech debt
@denys-davydov slacked me over lunch with issues on the `0.5.0` release. The fault is mine. (I published without running the build.) Totally avoidable with a CI machine 🤕 Quick fix: shipped a next patch. Capturing as an issue here so we're transparent about this weakness
1
really really need CI machine - @denys-davydov slacked me over lunch with issues on the `0.5.0` release. The fault is mine. (I published without running the build.) Totally avoidable with a CI machine 🤕 Quick fix: shipped a next patch. Capturing as an issue here so we're transparent about this weakness
build
really really need ci machine denys davydov slacked me over lunch with issues on the release the fault is mine i published without running the build totally avoidable with a ci machine 🤕 quick fix shipped a next patch capturing as an issue here so we re transparent about this weakness
57
8,656,914,283
IssuesEvent
2018-11-27 19:47:28
ca-cwds/design-system
https://api.github.com/repos/ca-cwds/design-system
closed
update deps to fix build-related flatmap-stream package issues
build tech debt
`npm-run-all` has a nested dep on the `flatmap-stream` package that was recently flagged as a security vulnerability. A patch release fixes it. Update to fix build/security-vulnerability issues.
1
update deps to fix build-related flatmap-stream package issues - `npm-run-all` has a nested dep on the `flatmap-stream` package that was recently flagged as a security vulnerability. A patch release fixes it. Update to fix build/security-vulnerability issues.
build
update deps to fix build related flatmap stream package issues npm run all has a nested dep on the flatmap stream package that was recently flagged as a security vulnerability a patch release fixes it update to fix build security vulnerability issues
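Verifying the fix locally is straightforward with stock npm commands:

```bash
# Show whether the vulnerable package is still anywhere in the tree.
npm ls flatmap-stream || true

# Pull in the patched npm-run-all release, then re-check the tree.
npm update npm-run-all
npm audit
```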
58
8,674,042,294
IssuesEvent
2018-11-30 05:44:35
eclipse/eclipse.jdt.ls
https://api.github.com/repos/eclipse/eclipse.jdt.ls
closed
BasicFileDetectorTest tests fail randomly
bug build/infra debt
Several tests (ReorgQuickFixTest, PrepareRenameHandler, RenameHandlerTest) set JavaLanguageServerPlugin.preferenceManager incorrectly, which is the reason for BasicFileDetectorTest tests to fail. See https://ci.eclipse.org/ls/job/jdt-ls-pr/1159/ You can reproduce the issue by running the following command multiple times: ``` mvn -Dtest=ReorgQuickFixTest,BasicFileDetectorTest clean verify ```
1
BasicFileDetectorTest tests fail randomly - Several tests (ReorgQuickFixTest, PrepareRenameHandler, RenameHandlerTest) set JavaLanguageServerPlugin.preferenceManager incorrectly, which is the reason for BasicFileDetectorTest tests to fail. See https://ci.eclipse.org/ls/job/jdt-ls-pr/1159/ You can reproduce the issue by running the following command multiple times: ``` mvn -Dtest=ReorgQuickFixTest,BasicFileDetectorTest clean verify ```
build
basicfiledetectortest tests fail randomly several tests reorgquickfixtest preparerenamehandler renamehandlertest set javalanguageserverplugin preferencemanager incorrectly which is the reason for basicfiledetectortest tests to fail see you can reproduce the issue by running the following command multiple times mvn dtest reorgquickfixtest basicfiledetectortest clean verify
59
8,678,306,561
IssuesEvent
2018-11-30 19:29:54
angelozerr/lsp4xml
https://api.github.com/repos/angelozerr/lsp4xml
opened
Ensure no tests are skipped
build debt
Currently the maven build skips 12 tests: > [WARNING] Tests run: 361, Failures: 0, Errors: 0, Skipped: 12 This should be addressed. Either make them pass, or remove them
1
Ensure no tests are skipped - Currently the maven build skips 12 tests: > [WARNING] Tests run: 361, Failures: 0, Errors: 0, Skipped: 12 This should be addressed. Either make them pass, or remove them
build
ensure no tests are skipped currently the maven build skips tests tests run failures errors skipped this should be addressed either make them pass or remove them
60
8,687,130,941
IssuesEvent
2018-12-03 12:54:16
syndesisio/syndesis
https://api.github.com/repos/syndesisio/syndesis
closed
Since we build on CircleCI we no longer need our Jenkinsfiles
cat/build cat/techdebt size/s status/stale
We should remove them from the repository. 😢
1
Since we build on CircleCI we no longer need our Jenkinsfiles - We should remove them from the repository. 😢
build
since we build on circleci we no longer need our jenkinsfiles we should remove them from the repository 😢
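A one-shot cleanup could be as simple as the following (assuming no tracked paths contain spaces):

```bash
# Delete every tracked Jenkinsfile in a single commit.
git rm $(git ls-files | grep -i 'jenkinsfile')
git commit -m "Remove Jenkinsfiles now that CircleCI builds the project"
```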
61
8,697,182,809
IssuesEvent
2018-12-04 19:32:55
ca-cwds/design-system
https://api.github.com/repos/ca-cwds/design-system
opened
eslint/prettier batch apply --fix to MDX
build docs tech debt
#284 defines formatting rules for `js[x]` files. Our `mdx` content needs a batch apply of formatting so we don't see diffs like this in future commits: ```diff --- a/apps/www/src/modules/components/modules/Button/Button.mdx +++ b/apps/www/src/modules/components/modules/Button/Button.mdx @@ -1,7 +1,7 @@ -import { Button, Icon, IconButton, Card, CardBody } from '@cwds/components' -import DemoCard from '../../../../components/DemoCard' -import PropTable from '../../../../components/PropTable' -import Docgen from '../ReactDogenInfo.js' +import { Button, Icon, IconButton, Card, CardBody } from "@cwds/components"; +import DemoCard from "../../../../components/DemoCard"; +import PropTable from "../../../../components/PropTable"; +import Docgen from "../ReactDogenInfo.js"; ``` I'd like to see the same formatting rules for js[x] applied to mdx front-matter and code-fenced snippets.
1
eslint/prettier batch apply --fix to MDX - #284 defines formatting rules for `js[x]` files. Our `mdx` content needs a batch apply of formatting so we don't see diffs like this in future commits: ```diff --- a/apps/www/src/modules/components/modules/Button/Button.mdx +++ b/apps/www/src/modules/components/modules/Button/Button.mdx @@ -1,7 +1,7 @@ -import { Button, Icon, IconButton, Card, CardBody } from '@cwds/components' -import DemoCard from '../../../../components/DemoCard' -import PropTable from '../../../../components/PropTable' -import Docgen from '../ReactDogenInfo.js' +import { Button, Icon, IconButton, Card, CardBody } from "@cwds/components"; +import DemoCard from "../../../../components/DemoCard"; +import PropTable from "../../../../components/PropTable"; +import Docgen from "../ReactDogenInfo.js"; ``` I'd like to see the same formatting rules for js[x] applied to mdx front-matter and code-fenced snippets.
build
eslint prettier batch apply fix to mdx defines formatting rules for js files our mdx content needs a batch apply of formatting so we don t see diffs like this in future commits diff a apps www src modules components modules button button mdx b apps www src modules components modules button button mdx import button icon iconbutton card cardbody from cwds components import democard from components democard import proptable from components proptable import docgen from reactdogeninfo js import button icon iconbutton card cardbody from cwds components import democard from components democard import proptable from components proptable import docgen from reactdogeninfo js i d like to see the same formatting rules for js applied to mdx front matter and code fenced snippets
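Prettier gained MDX support around v1.15, so the batch apply can be a single command; the glob is an assumption about the repo layout:

```bash
# One-off batch format of all MDX content with the shared prettier config.
npx prettier --write "apps/www/src/**/*.mdx"
```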
62
8,757,572,092
IssuesEvent
2018-12-14 21:45:28
eBay/skin
https://api.github.com/repos/eBay/skin
closed
Rebuild dist from scratch
aspect: build status: to do type: tech debt
When files are removed from src, obsolete artifacts can remain in dist. We should clear out and rebuild dist from scratch to clean up the repo.
1
Rebuild dist from scratch - When files are removed from src, obsolete artifacts can remain in dist. We should clear out and rebuild dist from scratch to clean up the repo.
build
rebuild dist from scratch when files are removed from src obsolete artifacts can remain in dist we should clear out and rebuild dist from scratch to clean up the repo
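The cleanup itself is a two-step affair; the build script name is an assumption about the repo's package.json:

```bash
# Remove the generated output wholesale, then regenerate it.
rm -rf dist
npm run build
git status dist   # review what came back before committing
```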