awesome thanks!
yeah, that’s a topic of ongoing debate… perhaps the active tag should be applied to all tasks and workflows, or we should go with a more generalized solution where workflows and tasks can be custom tagged and labeled, and then have methods like `fetch_tag`. The reason active is specifically important for launch plans is because of schedules: the admin service needs to know to change the schedule configuration for a launch plan based on which one is active. Anyway, to make your launch plan active: `pyflyte -p project -d domain -c flyte.config lp activate-all [--ignore-schedules]`
also, if you have an account, would you mind asking on stackoverflow and linking the question so I can answer there? We’d like to start making questions/concepts that seem common more easily searchable--and ideally have stackoverflow do the heavy lifting for us :p
hi Alex Pryiomka good question. We do have a roadmap that we will publish soon. It’s just that we need to look at resourcing and what the community wants. I can share a rough draft in a couple of weeks, once I am back in the office :slightly_smiling_face: I was on parental leave for all of January and the last part of December. I am slated to rejoin in 2 weeks
does Flyte has a roadmap of features and releases?
Congrats Ketan Umare on the baby arrival, I just came back early January from my parental leave :smile:
:slightly_smiling_face:
Ketan Umare, it has been almost two weeks now, any update on the roadmap? We would like to see what is coming in Flyte :smile:
this is the last week of paternity, I am coming to Zillow so we can discuss the roadmap. Let me share my aspirations list <https://docs.google.com/document/d/1yq8pIlhlG3gci3GJQNjdAd9bzZ-KYyLfm6I5NVms9-4/edit> I know this is a lot to digest, but cleaning it up will be one of the first things I do; also, some of these are already done
As you probably know, Flyte is composed of several components (each with their own semantic versions). Each component is being developed in parallel and releasing new versions. The `lyft/flyte` github repo contains our aggregated "complete flyte" deployment configuration. In other words, we specify a semantic version for each component and combine them into a single "flyte version". That file can be seen here: <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml> We haven't set a cadence for updating the "complete Flyte deploy". In other words, we do it periodically, without much formal reasoning right now. We should probably formalize that. For minor versions, the idea is that you should be able to `kubectl apply -f theCompleteDeployFile.yaml` and get the updates without issue. You may have custom-tailored deployments of your own that are built on top of this deployment (using something like `kustomize`). Our goal is that those, too, should update without issue. LMK if that answers your question?
how does version upgrade / migration happen in flyte. Does it have version upgrade documentation? If we deploy v0.1.0 and v0.1.1 is released, how do we upgrade?
Those are Kubernetes resources, which are more or less stateless. I am more concerned with the PostgreSQL schema migration. If we migrate to the next version, how can we migrate the existing workflows and projects? Johnny Burns ^
Ah, I understand. We have a container in the deployment which ensures the schema is up to date with the latest deployment (doing any necessary migrations). <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml#L1019-L1038> This is designed largely for migrations like adding a column, where missing data on the existing records isn't a big deal. If a schema update is not backward compatible, I imagine that would require a major version update. _we're still pretty new to this, so we can probably re-think, improve, and formalize this process_
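The "add a column, tolerate missing data" style of migration described here can be sketched with stdlib sqlite3 (Flyte itself uses GORM against Postgres; the table and column names below are made up for illustration):

```python
import sqlite3

# A toy "existing" database with one pre-migration record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflows (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO workflows (name) VALUES ('wf_a')")

# An additive, backward-compatible migration: existing rows simply
# get NULL for the new column, which readers must tolerate.
conn.execute("ALTER TABLE workflows ADD COLUMN description TEXT")

row = conn.execute("SELECT name, description FROM workflows").fetchone()
# row == ('wf_a', None): old records survive the migration untouched
```

A migration that dropped or renamed a column would not have this property, which is why a non-backward-compatible schema change maps to a major version bump.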
Alex Pryiomka we use GORM as the ORM layer and, as Johnny said, we have a schema migrator that migrates any changes; we have done some work to allow writing more custom logical migrations. Alex Pryiomka as for the entire platform, we are using an overall semantic version along with semver for each component. The interesting thing is we use protobuf and gRPC, which help in maintaining backwards compatibility as long as we are careful. Since the platform has been in use for a while, we have done some breaking changes internally and figured out how to do them painlessly. That being said, someday we will break something, but hopefully our versioning scheme will indicate that
That is a comprehensive answer, thanks Ketan Umare
cc Ally Gale short answer: no, not right now, but it shouldn’t be that hard to build workflows that do this based on existing constructs in Flyte (i.e. launch plans and dynamic tasks).
Does flyte scheduler have a backfill functionality similar to Airflow?
Alex Pryiomka backfill like Airflow only works if we understand what time is. From Flyte’s point of view, time is just another input. We do not actually have a built-in cron scheduler; we use the cloud schedulers. Backfill thus just implies running a pipeline with an older input. The interesting thing we want to work on is to use some open-source cron scheduler and keep state so that you can indicate re-executions in the UI or through the CLI, and also figure out how to manage resources for backfills (this is where we will innovate)
backfill usually assumes missed / failed runs and a start date / beginning of your DAG. Say I deploy a new version of a workflow with a bug fix. I would like to rerun previous runs to make sure the artifacts of the task outputs are corrected. Obviously the last thing I would want is to do it manually. If you have a start date on the workflow and the history of the executions based on the cron schedule, you should have no problem figuring out what to backfill. Ketan Umare ^ Backfills are actually one of the nicest things we like about Airflow, and we use them all the time :slightly_smiling_face:
I'll have to agree with Alex Pryiomka, and scheduled Flyte launch plans do understand time as a first-class citizen, Ketan Umare. We have discussed this internally a few times; it's not surprising to hear this is one of the frequent asks within Lyft as well. Is this something you would be willing to help spec/write up in the context of Flyte, Alex Pryiomka? We can definitely use help scoping the project, and we will be happy to provide guidance on how to move forward with it...
Haytham Abuelfutuh I am not denying that backfill is a good idea, I am just saying that at the moment it can be achieved using external means. But the scheduled launch plan is the perfect entity to have backfills on, not the workflows themselves. This also implies we need a scheduler to be built :blush: or integrated with. Alex Pryiomka as Haytham Abuelfutuh said, we would love to collaborate on this and would love it if you could help. Hongxin Liang from Spotify has a component called Styx that could be leveraged as well
Ketan Umare, I like the way Airflow does it. As far as scheduled runs go, I think it does it pretty well, except maybe the execution date piece (the date of the previous run): it should be just the scheduled datetime without any assumptions about previous runs. Two things come to mind: • the scheduled workflow should have an optional start date that can be either in the past or in the future • for every missed run since the start date, based on the current cron template, the scheduler should run the workflow with the execution date provided. Example: today is 02/07/2020 23:00:00 UTC. Say I deploy a new workflow that runs every 8 hours (`0 0/8 * * *`) with a start date of 02/06/2020 08:00:00 UTC. That means the missing runs would be 02/06/2020 08:00:00, 02/06/2020 16:00:00, 02/07/2020 00:00:00, 02/07/2020 08:00:00 and 02/07/2020 16:00:00. I would expect the scheduler to fill those in.
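The catch-up computation in the example above can be sketched in plain Python, using a fixed 8-hour interval in place of a real cron parser (`missed_runs` is an illustrative name, not a Flyte API):

```python
from datetime import datetime, timedelta

def missed_runs(start, now, every=timedelta(hours=8)):
    """Return the scheduled datetimes in [start, now] that a
    catch-up scheduler would need to backfill, oldest first."""
    runs = []
    t = start
    while t <= now:
        runs.append(t)
        t += every
    return runs

# The thread's example: start 02/06 08:00 UTC, "now" 02/07 23:00 UTC.
runs = missed_runs(datetime(2020, 2, 6, 8), datetime(2020, 2, 7, 23))
# -> the five datetimes listed above, 02/06 08:00 through 02/07 16:00
```

A real implementation would expand the cron expression itself and also subtract runs that already executed, so only genuinely missed slots get filled.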
Awesome, let’s write it down as a proposal. The problem is how failures are handled. For example, we have a start date in the past, so when we deploy, executions kick off. Let’s say there is a bug that fails some executions (not all). Now a new deployment is made; what should the behavior be?
it is a good question. It is easier in Airflow since each DAG is unique by name, and when you redeploy you effectively overwrite the existing DAG, whereas in Flyte each DAG is versioned. How does Flyte currently manage schedules for multiple versions of a workflow? If I have a workflow `abc:1.23` running once a day like so: `0 0 * * *`, and then I deploy `abc:1.24` with the same schedule, do I end up running two workflows now, or does the new version effectively cancel the previous one? I would say the later-deployed workflow should implicitly cancel the currently running one. For the failed executions, you do not backfill unless you go and manually delete them. Once deleted, the scheduler should refill them automatically. It is easier in Airflow since each DAG / workflow is unique; it gets tricky when you need to have multiple versions of the same workflow / DAG
Flyte handles it as you say: the newest version of the launch plan with the same name takes over and cancels the previous one. If the schedule cadence changes, it is changed going forward.
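The takeover rule described here (newest version of a same-named launch plan supersedes the previous one) reduces to a small amount of bookkeeping; a plain-Python sketch, not Flyte Admin's actual implementation:

```python
active = {}  # launch plan name -> currently active version

def activate(name, version):
    """Activating a launch plan version implicitly deactivates the
    previously active version with the same name, so its schedule
    stops firing. Returns the superseded version, if any."""
    previous = active.get(name)
    active[name] = version
    return previous

activate("abc", "1.23")
superseded = activate("abc", "1.24")  # "1.23" is cancelled
```

The key point is that the schedule is keyed by the launch plan *name*, not the version, which is why deploying `abc:1.24` does not leave `abc:1.23` running in parallel.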
Alex Pryiomka yes, as Matt says. Important to note: same-named LaunchPlan. As previously noted, the launch plan is the scheduled entity, and for backfill, as Haytham suggested, it would be a great place to house it
Sorry for the trouble Ruslan Stanevich. My understanding is that: • You have a custom flytepropeller config • You ran kustomize, which bumped the version of flytepropeller. • That version of flytepropeller was not compatible with your custom flytepropeller config. Let me know if I'm misunderstanding what happened. If that is the case, I would consider our FlytePropeller change an oversight. We strive not to make changes that are not backward compatible (this includes being backward compatible with respect to configs). The new propeller version should have been compatible with your config. It seems we might need some more process to make sure that type of change doesn't happen (I'll look into that).
Hi Everyone! Thank you Johnny Burns for your advice about secret management in Flyte workflows. It was very helpful :pray: Today I’d like to ask about Flyte deployment “best practice”. Basically, we configured our own overlays with some patches based on this article <https://lyft.github.io/flyte/administrator/install/production.html> and we refer to the remote flyte base repo. Some changes in the remote base repo require changes in our overlay too. E.g. <https://github.com/lyft/flyte/pull/164/commits/387228bb124b48a513b1b959b24c3057c0980926> requires removing quboleLimit in the overlay config, else propeller would not run. Yes, additionally reviewing diffs and tests, referring to release tags in kustomize, or having our own repo with the flyte base solves some possible issues. Considering regular “sync” with the Flyte base and stable releases, what is your recommended approach? Thank you in advance!
they should be getting deleted, are they not? can you do a kubectl get of the crd in the namespace and paste the output here?
Hello :raised_hand_with_fingers_splayed: Each time a Flyte `spark_task` workflow runs in k8s, it creates a `sparkapplications.sparkoperator.k8s.io` resource with a name like `{{executorID}}-{{taskName}}-{{workerNo}}`. According to <https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#deleting-a-sparkapplication> we should delete these custom resources with `kubectl delete ...`. Just interested whether there is a recommended way to automatically delete these resources from k8s, maybe based on completion status or something else, if that makes sense of course. Thanks!
sure! sorry cannot paste everything here. total it is 434 rows ```✘  kubectl get <http://sparkapplications.sparkoperator.k8s.io|sparkapplications.sparkoperator.k8s.io> --all-namespaces NAMESPACE NAME AGE dwh-s3-sync-staging s3-sync-segment-1581614830274559189 4h7m dwh-s3-sync-staging s3-sync-segment-1581618448285298411 3h6m ... place-search-workflows-development zuxdkx4b46-job-result-0 15d place-search-workflows-development zy8z1p1w6q-run-apply-changes-0 3d7h ... pyspark-example-development wr0vxws9rf-w2c-result-0 29d pyspark-example-development yos1nmr86f-w2c-result-0 41d pyspark-word2vec-example-development ad28i5oh50-w2c-result-0 3d7h pyspark-word2vec-example-development at27f5ccvl-w2c-result-0 3d6h```
Anmol Khurana can you think of any reason why these wouldn’t get removed? also, can you track down the corresponding flyte workflow crd instance for one of them? when the parent flyte workflow crd is finished, the child resources should get reaped
Ruslan Stanevich it will stay around as long as the workflow does; once the workflow is deleted, it should get auto-deleted. We keep all resources around as long as the workflow is around, and the workflow gets GC’ed every few hours (or that may be disabled locally)
do you mean deleting completed workflows using <https://github.com/lyft/flytepropeller#deleting-workflows> `kubectl-flyte delete --namespace {{ namespace }} --all-completed`? thank you!
You should not need to do that, there is a garbage collection system that deletes completed workflows based on configuration
Is this GC configured in Propeller? Just having the “default sandbox” configuration, we’ve got 450+ completed workflows (the oldest finished 41 days ago). Example for one namespace: ```kubectl-flyte get --namespace mapmaking-workflows-development Listing workflows in [mapmaking-workflows-development] .............................................................................................................................. Found 127 workflows Success: 24, Failed: 103, Running: 0, Waiting: 0 | Namespace| Total|Success| Failed|Running|Waiting| QuotasUsage| | mapmaking-workflows-development| 127| 24| 103| 0| 0| -|``` sorry if I misunderstood something :slightly_smiling_face: And (127 rows) ```kubectl get sparkapplications.sparkoperator.k8s.io --namespace mapmaking-workflows-development NAME AGE a06a9vmgwa-run-aggregate-errors-0 18d a06a9vmgwa-run-detect-errors-0 18d ... a5o1wvtf9m-run-aggregate-errors-0 29h a5o1wvtf9m-run-detect-errors-0 32h```
No worries, I am not explaining it well; I should point you to the GC configuration. Also, it is OK to delete workflows that are completed. Once I am near a computer, I will send a link to the GC config
Oh, thanks a lot :pray:
<https://github.com/lyft/flytepropeller/blob/master/config.yaml#L12>
they should work with minikube provided you have an ingress to the k8s web console. generally, this is set up via port forwarding so it is just something like: <https://github.com/lyft/flyte/blob/5df4e997306f6829836845851ae4fcb82dab151b/kustomize/overlays/test/propeller/plugins/config.yaml#L4>
hi guys, quick question: are the kubernetes log URLs supposed to work with minikube/baremetal, or do they only work with cloud providers? I can’t figure out what to put in the “kubernetes-url:” field in my deployment yaml…
Hi Matt, when you say “web console” you are talking about the kubernetes dashboard correct?
yes that’s correct, the k8s dashboard. It didn’t make its way into the public repo, however. There’s nothing Lyft-specific in there; we just didn’t feel it was clean enough to include, especially since it’s mostly copied from the public version of the same thing
ya, so Giordano, we do not collect and store logs at all; we just point you to logs somewhere else, like cloud providers’ log services, or the K8s dashboard for bare k8s. It’s easy to add new services like Splunk, Datadog, etc.
I haven't tried NLB personally, but I imagine the answer is no. I'll save you some time and tell you that for sure neither ALB nor ELB can handle gRPC SSL termination, because: ALB downgrades all connections to http1 at the load balancer. This won't work as gRPC needs http2. ELB does not understand http2. The reason I _think_ NLB won't work is because NLBs are L4 devices. http2 TLS requires something called ALPN (Application Layer Protocol Negotiation). As the name suggests, this happens at L7 (the application layer), so an L4 device like the NLB is incapable of speaking that language. For my personal Flyte installation, I put the ELB in "passthrough" (L3) mode and handle TLS certs behind the ELB (via envoy / nginx). It's not ideal but it works. I think NLB supports passthrough mode as well, fwiw.
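The ALPN negotiation mentioned here is visible in Python's stdlib `ssl` module; a sketch of a client context offering h2, which is exactly the part of the handshake a pure L4 load balancer never parses:

```python
import ssl

# A TLS client context advertising HTTP/2 via ALPN. The offered
# protocol list rides inside the TLS handshake, an application-layer
# (L7) concern, so an L4 device terminating TLS cannot answer the
# "h2" offer and the gRPC connection falls back or fails.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])
```

In passthrough mode, the load balancer forwards the raw bytes untouched, so the ALPN exchange happens end to end between the client and envoy/nginx, which is why that setup works.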
Hi Everyone! Short question related to SSL termination in Flyteadmin for gRPC traffic (registering workflow for example) if I configure SSL here <https://github.com/lyft/flyteadmin/blob/6a64f00315f8ffeb0472ae96cbc2031b338c5840/flyteadmin_config.yaml#L9-L13> will AWS Network LB with TLS listener handle it correctly? or what is your recommendations? Thanks!
thank you Johnny for saving my time! :slightly_smiling_face: yes, it works well with both classic ELB and NLB in “passthrough” mode. And your advice will help with handling TLS for gRPC :slightly_smiling_face:
yeah, just to chime in as well: at Lyft, the production installation of flyte admin also does not run over SSL. All SSL is terminated by envoy. The code there was built in as an alternative, but I imagine most people will want to handle SSL at the nginx layer
Currently dynamic_task is the only supported way to achieve this. We do, however, have branch nodes defined in the Flyte spec language, but they are not yet exposed in the Python SDK <https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/core/workflow.proto#L40-L45>
Hi guys, quick question: let’s say I have a workflow with 2 tasks, would it be possible to pick one of them based on one of the inputs provided to the workflow? I know that the input objects can be passed to the task, but I can’t figure out a way to grab the actual value that gets sent during execution to compare it against an if statement… something similar to this: ```@workflow_class
class WF_train_hyperopt_yolo_experiment(object):
    gpu = Input(Types.Boolean, required=True, help="use gpu")
    if gpu == True:
        task = run_task_1()
    else:
        task = run_task_2()``` my use case would be to have a single workflow that could spin up a deep learning task either using GPUs or just CPUs instead of doing 2 different workflows. update: I was able to do what I wanted with a dynamic task; is that the preferred method or is there a better way?
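The reason the dynamic-task approach works is that inside a dynamic task the input is a concrete value rather than a workflow-time placeholder, so ordinary Python branching applies. A plain-Python sketch of the dispatch (the `run_task_*` functions below stand in for real flytekit tasks; names are illustrative):

```python
def run_task_1():
    # stand-in for the GPU training task
    return "gpu training"

def run_task_2():
    # stand-in for the CPU training task
    return "cpu training"

def pick_training_task(gpu: bool):
    """Inside a dynamic task, 'gpu' is a materialized boolean, so a
    normal if-statement can choose which sub-task to yield."""
    return run_task_1() if gpu else run_task_2()
```

In the workflow-class body itself, `gpu` is only a specification object, which is why the `if gpu == True:` comparison in the question cannot see the runtime value.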
I think you are correct, AFAIK. This is a feature that would be really good to have though. If you have any interest in contributing this feature I'm happy to help.
Hello Everyone :hand: <https://github.com/lyft/flyte/issues/36> The question is about this feature request: is there any way (maybe an API call) to remove a registered workflow from Flyte? As I see it, `pyflyte` has no such command, and `kubectl-flyte` deletes the workflow as a k8s resource. Sorry if I am incorrect about something :slightly_smiling_face: thanks!
Hey Eduardo Giraldo When Flyte "registers" a workflow, it stores a textual representation of the workflow: Task A => Task B => Task C. Each of those tasks represents a container to be run, so flyte needs to know which container "task A" represents. We typically do that with environment vars: <https://github.com/lyft/flytesnacks/blob/master/python/Dockerfile#L33>
Hello everyone, one question: where can I get more info about how to run a workflow as a test locally? Every time I try to run this command: ```docker run --network host -e FLYTE_PLATFORM_URL='127.0.0.1:30081' {{ your docker image }} pyflyte -p myflyteproject -d development -c sandbox.config register workflows``` it says: `Exception: Could not parse image version from configuration. Did you set it in theDockerfile?`
Hello Johnny Burns, thanks for the answer. Actually I’m trying to run my test, but it does not accept the image I have; first it said it was not latest, then this error appears, so I’m confused about how it works. This is my actual workspace, and as you can see I’m running my image, the one I built with docker under that name
Eduardo Giraldo is your image literally called "latest", or is that your image _version_ ? I think you need "imagename:latest"? We build ours here: <https://github.com/lyft/flytesnacks/blob/master/python/scripts/docker_build.sh#L28>
Johnny Burns I checked and it is the latest one; I also built it again after pruning the system, but it shows the same error T_T As you can see it is the latest one
Eduardo Giraldo Sorry I probably didn't explain well. I think you need to change your Dockerfile. Change `FLYTE_INTERNAL_IMAGE` from: "latest" to "flyte_test:latest" (unless you already did that)
I already tried that but it has the same result :disappointed:
Is the error `Exception: Could not parse image version from configuration. Did you set it in theDockerfile?` ?
Yes sir :smiley:
If so, can you `docker run -it flyte_test:latest` and call `echo $FLYTE_INTERNAL_IMAGE`
It had another value; I built it again and now it is running. Thank you so much, you rock :smiley:
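For anyone hitting the same error: the likely mechanism is that flytekit derives a registration version by splitting `FLYTE_INTERNAL_IMAGE` on `:`, and a bare `latest` gives it nothing to split. The parsing below is a simplified assumption for illustration, not flytekit's actual implementation.

```python
# Simplified, assumed sketch of how an image string like
# "flyte_test:latest" is split into (name, version). A value with no
# colon reproduces the "Could not parse image version" style failure.
def parse_image_version(image: str):
    name, sep, version = image.rpartition(":")
    if not sep or not name:
        raise ValueError("Could not parse image version from configuration.")
    return name, version
```

So `FLYTE_INTERNAL_IMAGE=latest` fails, while `FLYTE_INTERNAL_IMAGE=flyte_test:latest` parses cleanly, which matches the fix above.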
does anyone other than Lyft use Flyte in production?
None that I know of for sure (since it's open-source, one could do so and not say so). You could almost count "L5", which is owned by Lyft but runs like a separate entity (they manage their own Flyte clusters). Spotify is using it, but I'm unsure in what capacity (Hongxin Liang could tell you more)
We are still in experimenting phase and not in production.
Hongxin Liang is it Flyte vs the status-quo or are considering multiple alternatives to the status-quo, like Prefect or Metaflow?
The former case as you described. Jonathon Belotti
Alex Pryiomka almost every company that we know of is experimenting with Flyte. I guess that’s what happens in infancy
Hongxin Liang from blog posts I gather that status-quo is Kubeflow?
Jonathon Belotti actually it seems Spotify has legacy infra for data, which is the team Hongxin Liang works on. There is a team in NYC that is experimenting with Kubeflow for ML. Please correct me if I am wrong @honnix. Jonathon Belotti, are you looking into Flyte for a specific reason, company, or personal interest?
Ahh thanks for clarification. I was looking at <https://labs.spotify.com/2019/12/13/the-winding-road-to-better-machine-learning-infrastructure-through-tensorflow-extended-and-kubeflow/>
My name is Ketan and I would love to understand your use cases. Yeah, I do see an eventual convergence
Ketan Umare I work at Canva and we run Argo workflows. I’m the owner of that system and I’m not that thrilled with it. I’ve had a short, interesting chat with Haytham here about how he sees the Argo vs. Flyte match-up, and it was convincing enough to keep me looking at Flyte. Right now I’m studying Flyte’s design for learning. We can’t justify migration from Argo but hoping to take some lessons across.
Absolutely. Also, we would love to get your thoughts when you finalize them
:+1:
With Spotify we are trying to do something interesting: compiling their Luigi pipelines directly to Flyte. We would like to know if you are open to such an exploration for Argo
Wow, sounds ambitious. I think I want to spend more time understanding the key tradeoffs between Argo and Flyte, while also iterating in other areas of our workflow system (eg. DAG SDK, security model) to see if improvements there offer higher ROI.
Absolutely
Jonathon Belotti would love to hear your experience with Argo, particularly the rough edges. It's something we might evaluate.
Oliver Mannion
• We’re pretty disillusioned with Argo’s YAML templating approach. We got along OK using Jsonnet to spit out the YAML, but we don’t think it’s better than a Python SDK for building workflows, and our Data Scientists really have not warmed to writing Jsonnet. There’s a Python DSL for Argo now, but we’re not on it (yet) and haven’t assessed its quality.
• Bugs. In the 9 months I’ve been working with Argo, there have been at least 3 or 4 bugs that made it into a release and created downtime or broke a helpful feature. This recent regression meant we shipped some broken dags to our clusters that we’d normally catch in CI -&gt; <https://github.com/argoproj/argo/issues/2313>
• Lack of types. Argo’s basically stringly-typed and that’s sucked. Its `Parameter` object in Golang is `key String, value String`. Not infrequently we find it’d be great to have `Parameter` values have types.
• Task caching is not well supported in Argo, I think because the dataflow graph wasn’t a big focus for them. Argo’s DAGs describe the execution of containers and not the flow of data artifacts. Flyte has made task caching a first-class feature.
• Storing history of DAGs wasn’t possible until a recent release.
There’s more I could say and I haven’t covered the positives, but I’ve got to get into a call.
Oliver Mannion Jonathon Belotti I would love to get on a call with you guys, show how we are going to move forward, and hear your feedback. As said, we are still a small community, but we are focused on making this work at scale, as we actually deploy this every day at Lyft (just like we did with Envoy). All the work we are doing is to ensure that data on kubernetes is a reality, and I am going to start doing biweekly calls, a more open roadmap, and release trains
I’d be happy to, but I’d want to take more time to test-drive Flyte at work so I can give better feedback. Could do meeting beyond say… Monday week.
ohh that's fine, even before you start test-driving we could just share our use cases and future roadmap. great, thank you. also do keep an eye on our issues
Thanks so much Jonathon, that's super helpful feedback. I'd be happy to jump on a call. Although I'm also very early on in evaluating Flyte and haven't given it a good test drive yet.
Oliver Mannion / Jonathon Belotti as said earlier, I would like to start a conversation with you guys. :slightly_smiling_face:
No worries, if we can agree on a time that crosses between our timezones (i’m in Sydney, Australia) I can get on a call.
I am in PST (Seattle USA)
I'm in Melbourne, Australia, same timezone as Sydney :slightly_smiling_face: Jonathon Belotti just wondering, did you ever evaluate Airflow?
the team did, just prior to my arrival
Hello Guys, I'm here trying to make my own module to work with Flyte, but whenever I try to register the new flow it answers with `ModuleNotFoundError`. Am I doing something wrong?
Can you share what command you are trying to run?
and can you describe what you’re trying to do? and can you please `echo $PYTHONPATH`?
I might know what your issue is. You have "workflow_packages" set to `flyte`. This means the system expects your workflow to live in the `flyte` module. You need a directory `flyte` (which I see you have). It needs to have an `__init__.py`; any chance that is missing?
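To make the diagnosis above concrete, here is a self-contained sketch of the layout that `workflow_packages = flyte` implies. The file names are illustrative; the key point is that without `flyte/__init__.py`, the import (and hence pyflyte's package discovery) fails with `ModuleNotFoundError`.

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build the expected layout in a temp dir:
#   <root>/
#     flyte/
#       __init__.py   <- omitting this is what triggers ModuleNotFoundError
#       workflows.py
root = Path(tempfile.mkdtemp())
pkg = root / "flyte"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "workflows.py").write_text("GREETING = 'hello'\n")

# With the __init__.py in place, the package imports cleanly:
sys.path.insert(0, str(root))
mod = importlib.import_module("flyte.workflows")
```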
In fact you're right, it was a problem with the path. I already solved it, but now every time I run my task it answers this: `Retries [0/1], task failed, TaskFailedUnknownError: Container/Pod failed. No message received from kubernetes. Could be permissions?` I tried to run this short example but it does not allow it :'(
Sorry about that confusing error. Can you try `kubectl get pods -n {yourproject}-development`? Then `kubectl logs -n {yourproject}-development {yourpod}`
Hello Johnny, first I want to say thanks, you're always here to help :smiley:. I'm running it locally with minikube. When I run `kubectl get pods -n flyte` it shows me this: But when I try to get the logs it does not find the pod =(
Eduardo Giraldo `flyteadmin` isn't the pod you're looking for, I think. You want the pod which ran your task. Your pod ran in the namespace `yourproject-yourdomain`; yourdomain is probably "development". Alternatively, you can just do `kubectl get pods --all-namespaces` to see every pod in every namespace (your pod should be included)
If the `lyft/protocgenerator` code is open-source that'd be really cool. I'm beginning to use the docker image a lot xD
I will move the code there
Hi, can I get a +1 for this PR? <https://github.com/lyft/flyte/pull/184> Thanks.
Do you mind running `make kustomize` to update the generated files?
Thanks for minding, fixed it. wdyt of <https://github.com/lyft/flyte/pull/200>? I got an approval already from Yee. Lightning fast!
yeah i like these github workflow things. trying to use them more across all our repos in place of travis.
seems production ready. one thing, as I just commented: i didn’t know force push would remove previous workflow executions. _something worth knowing i think_
just force push on the topic branch right?
yeah
yeah, all good. as long as it’s not master we don’t care
scary that should be disabled.
Hello guys, I have a couple of questions about config: 1. Is there a way to set up the username and password to download a private docker image, or to use only a locally built image on the Local Sandbox? 2. How can I set up my own s3 bucket on the Local Sandbox?
So for #1, you'd usually use kubernetes imagepullsecrets <https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/> For #2, you say "setting up a custom bucket". Do you mean creating a custom bucket? Or configuring your Flyte workflow to use a custom bucket that has already been created?
Hello Johnny Burns, I want to work with an S3 bucket in my AWS account. I already created it, but I can't figure out how to configure the sandbox to use my credentials and connect to the bucket
Ah, yeah, great question. So you can configure which s3 bucket to use in the SDK configs: <https://github.com/lyft/flytesnacks/blob/master/python/sandbox.config#L13> As for permissions to upload/download from that bucket, there are several ways to do that. The first way is using AWS "roles". Since you're running locally, you probably want username/password though, I'm guessing. You can configure that in the SDK configs here: <https://github.com/lyft/flytesnacks/blob/a6ec170713efa76aa392e5d7648c5765012d010c/python/sandbox.config#L15-L17>
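To sketch what that looks like in practice: the `[aws]` block below mirrors the linked flytesnacks sandbox.config, parsed here with `configparser` only so the snippet is runnable. The bucket name and credentials are placeholders, and the exact key names should be double-checked against your flytekit version.

```python
from configparser import ConfigParser

# Assumed [aws] section of a sandbox.config pointing at a custom bucket.
# Key names follow the flytesnacks example linked above; all values are
# placeholders to be replaced with your own.
SAMPLE = """\
[aws]
endpoint = https://s3.amazonaws.com
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
s3_shard_formatter = s3://my-custom-bucket/{}/
s3_shard_string_length = 2
"""

cfg = ConfigParser()
cfg.read_string(SAMPLE)
# The shard formatter is where the target bucket is actually named:
bucket_template = cfg["aws"]["s3_shard_formatter"]
```

In production you would typically use an assumable IAM role instead of static keys, as mentioned above.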
ok Johnny, I'll check it out. Thanks for the info; as always, you rock :smiley:
Hello Everyone! I have a question about running `SPARK tasks`. Is there a way to run their executors on nodes with `GPU`? We’ve managed to do this for python Sidecar tasks. As far as I know, for sparkoperator we should specify the GPU request in the `sparkapplications.sparkoperator.k8s.io` CRD. If it is possible, could you please advise how to specify this from python code? Thank you in advance! <https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#requesting-gpu-resources>
Spark 2.4.x doesn't natively support GPUs, and GPU support in the operator needs a web-hook to be enabled. Spark 3.0 will support GPUs natively, which will simplify some of this set-up. Coming to Flyte, we again don't currently support this: <https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/k8s/spark/spark.go#L153> and would need some changes in flytekit/flyteplugins to pass gpu config through to the spark-operator. Let me cut an issue so that we can prioritize/track this. Filed <https://github.com/lyft/flyte/issues/224> to track this. Feel free to add any details I missed Ruslan Stanevich Ketan Umare
Ruslan Stanevich want to help us launch this in open source? contributions welcome