| output | input |
| --- | --- |
yes, ray tune should work, you could test it locally first.
Did you use ray 2.0? I got the same error, but fixed it by upgrading ray.
btw, the type of `result` isn’t int. it’s `ResultGrid`.
| The above error is what I got when I ran it locally using `python example.py`. When I executed it using the pyflyte run command, I got this error.
Even after removing the distributed config, I am getting the same error.
|
What is the method to specify the result type as `ResultGrid`.
```@workflow
def ray_workflow(n: int) -> ResultGrid:
return ray_task(n=n)```
Is this the way?
| yes, ray tune should work, you could test it locally first.
Did you use ray 2.0? I got the same error, but fixed it by upgrading ray.
btw, the type of `result` isn’t int. it’s `ResultGrid`.
|
yes, flytekit will serialize it to pickle by default, but you could register a new type transformer to serialize it to protobuf.
<https://docs.flyte.org/projects/cookbook/en/latest/auto/core/extend_flyte/custom_types.html>
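For reference, here is a minimal sketch of what such a transformer could look like, written from memory against flytekit's extension API rather than taken from the docs above; the class, the `ray-result-grid` name, and the method bodies are illustrative assumptions, and it stores the grid as a pickle-backed blob purely to show the shape of the interface (the protobuf route suggested above would swap the pickle dump/load for message serialization):

```python
import pickle
from typing import Type

from flytekit import FlyteContext
from flytekit.extend import TypeEngine, TypeTransformer
from flytekit.models.core.types import BlobType
from flytekit.models.literals import Blob, BlobMetadata, Literal, Scalar
from flytekit.models.types import LiteralType
from ray.tune import ResultGrid


class ResultGridTransformer(TypeTransformer[ResultGrid]):
    """Teaches flytekit how to pass a ResultGrid between tasks as a single blob."""

    def __init__(self):
        super().__init__(name="ray-result-grid", t=ResultGrid)

    def get_literal_type(self, t: Type[ResultGrid]) -> LiteralType:
        # Declare the Flyte-side type: a single-file blob tagged with a custom format.
        return LiteralType(
            blob=BlobType(
                format="ray-resultgrid-pickle",
                dimensionality=BlobType.BlobDimensionality.SINGLE,
            )
        )

    def to_literal(self, ctx: FlyteContext, python_val: ResultGrid, python_type, expected) -> Literal:
        # Dump the object locally, upload it to the raw output store, and wrap the URI.
        local_path = ctx.file_access.get_random_local_path()
        with open(local_path, "wb") as f:
            pickle.dump(python_val, f)
        remote_path = ctx.file_access.get_random_remote_path(local_path)
        ctx.file_access.put_data(local_path, remote_path, is_multipart=False)
        meta = BlobMetadata(type=self.get_literal_type(python_type).blob)
        return Literal(scalar=Scalar(blob=Blob(uri=remote_path, metadata=meta)))

    def to_python_value(self, ctx: FlyteContext, lv: Literal, expected_python_type) -> ResultGrid:
        # Download the blob and rebuild the Python object.
        local_path = ctx.file_access.get_random_local_path()
        ctx.file_access.get_data(lv.scalar.blob.uri, local_path, is_multipart=False)
        with open(local_path, "rb") as f:
            return pickle.load(f)


# Registering the transformer (e.g. at module import time) lets tasks and
# workflows annotate ResultGrid directly in their signatures.
TypeEngine.register(ResultGridTransformer())
```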
| What is the method to specify the result type as `ResultGrid`.
```@workflow
def ray_workflow(n: int) -> ResultGrid:
return ray_task(n=n)```
Is this the way?
|
I have upgraded the ray version; it is 2.1.0 now. But I am still getting this error when I specify the result type as ResultGrid. Is it compulsory to register a new type transformer? Is the error caused because of that? Now the ray cluster is getting initiated, but after that I get the error.
For demo purposes I just returned the length of the result grid (see the previous message). The ray instance is getting initiated.
`AttributeError: 'NoneType' object has no attribute 'encode'`
`ray.tune.error.TuneError: The Ray Tune run failed. Please inspect the previous error messages for a cause. After fixing the issue, you can restart the run from scratch or continue this run.`
```import ray
from ray import tune, air
from ray.air import Result
from ray.tune import ResultGrid
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig


@ray.remote
def objective(config):
    return config["x"] + 2


ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
    runtime_env={"pip": ["numpy", "pandas"]},
)


@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def ray_task() -> int:
    model_params = {
        "x": tune.randint(-10, 10)
    }
    tuner = tune.Tuner(
        objective,
        tune_config=tune.TuneConfig(
            num_samples=10,
            max_concurrent_trials=2,
        ),
        param_space=model_params,
    )
    result_grid = tuner.fit()
    return len(result_grid)


@workflow
def ray_workflow() -> int:
    return ray_task()```
| yes, flytekit will serialize it to pickle by default, but you could register a new type transformer to serialize it to protobuf.
<https://docs.flyte.org/projects/cookbook/en/latest/auto/core/extend_flyte/custom_types.html>
|
So your ray run itself is failing
| I have upgraded the ray version; it is 2.1.0 now. But I am still getting this error when I specify the result type as ResultGrid. Is it compulsory to register a new type transformer? Is the error caused because of that? Now the ray cluster is getting initiated, but after that I get the error.
For demo purposes I just returned the length of the result grid (see the previous message). The ray instance is getting initiated.
`AttributeError: 'NoneType' object has no attribute 'encode'`
`ray.tune.error.TuneError: The Ray Tune run failed. Please inspect the previous error messages for a cause. After fixing the issue, you can restart the run from scratch or continue this run.`
```import ray
from ray import tune, air
from ray.air import Result
from ray.tune import ResultGrid
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig


@ray.remote
def objective(config):
    return config["x"] + 2


ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
    runtime_env={"pip": ["numpy", "pandas"]},
)


@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def ray_task() -> int:
    model_params = {
        "x": tune.randint(-10, 10)
    }
    tuner = tune.Tuner(
        objective,
        tune_config=tune.TuneConfig(
            num_samples=10,
            max_concurrent_trials=2,
        ),
        param_space=model_params,
    )
    result_grid = tuner.fit()
    return len(result_grid)


@workflow
def ray_workflow() -> int:
    return ray_task()```
|
Interesting, are you saying that if you have 2 tasks, one Ray and one not, Ray still gets initialized for the second one?
Do you have ray.init at the module level?
| Hi .. I earlier tried using ray.init() for my previous flyte task. Now how should I override the Ray engine to use the default? Even after shutting down the ray instance, I can see Ray gets initialized automatically??
|
Yes.. My script doesn't even have `import ray`.
Not sure where I am going wrong here?
| Interesting, are you saying that if you have 2 tasks, one Ray and one not, Ray still gets initialized for the second one?
Do you have ray.init at the module level?
|
Can you share an example for us to help
| Yes.. My script doesn't even have `import ray`.
Not sure where I am going wrong here?
|
The moment the libraries are installed, ray is getting initialized.
Because of this I am getting UserWarning: The size of /dev/shm is too small (67108864 bytes). The required size at least half of RAM (33303341056 bytes). Please, delete files in /dev/shm or increase size of /dev/shm with --shm-size in Docker. Also, you can set the required memory size for each Ray worker in bytes to MODIN_MEMORY environment variable.
I'm actually not using Ray here.
| Can you share an example for us to help
|
Have you checked `ray status`?
| Hi Team,
We are trying to run the flyte workflow for ML training using an xgboost classifier, but we are getting the below error. Could you please help?
```Placement group creation timed out. Make sure your cluster either has enough resources or use an autoscaling cluster. Current resources available: {'CPU': 1.0, 'object_store_memory': 217143276.0, 'memory': 434412750.0, 'node:10.69.53.118': 0.98}, resources requested by the placement group: [{'CPU': 1.0}, {'CPU': 1.0}]```
|
Kevin Su
so we spoke with Keshi Dai about the persistent cluster feature.
to clarify, you mean that if one workflow runs two ray tasks, they both end up using the same cluster right?
| hey a while back there was an RFC on ray integration that included something about support for persisting cluster resources across tasks, is that something still in progress? can someone point me to the docs?
|
yea to save the boot time right?
and possibly skip serialization?
| Kevin Su
so we spoke with Keshi Dai about the persistent cluster feature.
to clarify, you mean that if one workflow runs two ray tasks, they both end up using the same cluster right?
|
to save boot time yes. serialization still happens iiuc
<https://github.com/flyteorg/flyteplugins/tree/master/go/tasks/plugins/k8s/ray> is all the code right kevin?
| yea to save the boot time right?
and possibly skip serialization?
|
yup
| to save boot time yes. serialization still happens iiuc
<https://github.com/flyteorg/flyteplugins/tree/master/go/tasks/plugins/k8s/ray> is all the code right kevin?
|
do you have any more info on the persistent clusters? we could potentially use it to speed up our end to end workflows by quite a bit :slightly_smiling_face:
| yup
|
we don’t support it right now. we can support that, need to update backend plugin. I’ll work on it next week, and get back to you once it’s done.
| do you have any more info on the persistent clusters? we could potentially use it to speed up our end to end workflows by quite a bit :slightly_smiling_face:
|
haha that's great! but i'm mostly looking to understand the mechanics and i remember there being an RFC discussing them
we don't have an urgent timeline and are looking to plan some work
| we don’t support it right now. we can support that, need to update backend plugin. I’ll work on it next week, and get back to you once it’s done.
|
Kevin Su let’s wait on this. Dylan Wilder what we understood from Keshi Dai was that there is potential corruption that happens when a cluster is reused in ray
| haha that's great! but i'm mostly looking to understand the mechanics and i remember there being an RFC discussing them
we don't have an urgent timeline and are looking to plan some work
|
does that mean it's off the roadmap?
or just needs to be thought through more?
| Kevin Su let’s wait on this. Dylan Wilder what we understood from Keshi Dai was that there is potential corruption that happens when a cluster is reused in ray
|
its not off the roadmap
it can be done on flyte side, we dont know if ray is ready yet
but de-prioritized
| does that mean it's off the roadmap?
or just needs to be thought through more?
|
got it, thanks for the context :pray:
actually wait, "it can be done on flyte side" does this mean the infra for reusing resources exists?
| its not off the roadmap
it can be done on flyte side, we dont know if ray is ready yet
but de-prioritized
|
There is a `ClusterSelector` in the RayJob <https://github.com/ray-project/kuberay/blob/master/ray-operator/apis/ray/v1alpha1/rayjob_types.go#L53|CRD>, so basically we should be able to use it to run the ray job on an existing cluster. The propeller needs to save the rayCluster id generated by the first ray task, and the second ray task should reuse the same ray cluster by passing the cluster selector. Lastly, propeller shuts down the ray cluster at the end node.
| got it, thanks for the context :pray:
actually wait, "it can be done on flyte side" does this mean the infra for reusing resources exists?
|
Ketan Umare We will need this feature as well. With more complex Flyte workflows, users should be able to share Ray cluster among different Flyte tasks.
| There is a `ClusterSelector` in the RayJob <https://github.com/ray-project/kuberay/blob/master/ray-operator/apis/ray/v1alpha1/rayjob_types.go#L53|CRD>, so basically we should be able to use it to run the ray job on an existing cluster. The propeller needs to save the rayCluster id generated by the first ray task, and the second ray task should reuse the same ray cluster by passing the cluster selector. Lastly, propeller shuts down the ray cluster at the end node.
|
Running out of disk.
Request more please.
| Hi, was trying distributed training using ray in flyte. I am getting this error while running.
```from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig
import ray
from ray import tune

# ray.init()
# ray.init("auto", ignore_reinit_error=True)

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
)

num_actors = 4
num_cpus_per_actor = 1

ray_params = RayParams(
    num_actors=num_actors, cpus_per_actor=num_cpus_per_actor)


def train_model(config):
    train_x, train_y = load_breast_cancer(return_X_y=True)
    train_set = RayDMatrix(train_x, train_y)
    evals_result = {}
    bst = train(
        params=config,
        dtrain=train_set,
        evals_result=evals_result,
        evals=[(train_set, "train")],
        verbose_eval=False,
        ray_params=ray_params)
    bst.save_model("model.xgb")


@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1"))
def train_model_task() -> dict:
    config = {
        "tree_method": "approx",
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "eta": tune.loguniform(1e-4, 1e-1),
        "subsample": tune.uniform(0.5, 1.0),
        "max_depth": tune.randint(1, 9)
    }
    analysis = tune.run(
        train_model,
        config=config,
        metric="train-error",
        mode="min",
        num_samples=4,
        resources_per_trial=ray_params.get_tune_resources())
    return analysis.best_config


@workflow
def train_model_wf() -> dict:
    return train_model_task()```
|
`@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1", ephemeral_storage="500Mi"))`
| Running out of disk.
Request more please.
|
```from sklearn.datasets import load_breast_cancer
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig
import ray
from ray import tune

# ray.shutdown()
# ray.init()
# ray.init("auto", ignore_reinit_error=True)

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)

num_actors = 2
num_cpus_per_actor = 1

ray_params = RayParams(
    num_actors=num_actors, cpus_per_actor=num_cpus_per_actor)


def train_model(config):
    train_x, train_y = load_breast_cancer(return_X_y=True)
    train_set = RayDMatrix(train_x, train_y)
    evals_result = {}
    bst = train(
        params=config,
        dtrain=train_set,
        evals_result=evals_result,
        evals=[(train_set, "train")],
        verbose_eval=False,
        ray_params=ray_params)
    bst.save_model("model.xgb")


# @task(limits=Resources(mem="2000Mi", cpu="1"))
@task(task_config=ray_config, limits=Resources(mem="3000Mi", cpu="1", ephemeral_storage="3000Mi"))
def train_model_task() -> dict:
    config = {
        "tree_method": "approx",
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "eta": tune.loguniform(1e-4, 1e-1),
        "subsample": tune.uniform(0.5, 1.0),
        "max_depth": tune.randint(1, 9)
    }
    analysis = tune.run(
        train_model,
        config=config,
        metric="train-error",
        mode="min",
        num_samples=4,
        max_concurrent_trials=1,
        resources_per_trial=ray_params.get_tune_resources())
    return analysis.best_config


@workflow
def train_model_wf() -> dict:
    return train_model_task()```
Still getting this error even when we specify the `ephemeral_storage` value. Do you have any suggested limits for cpu and memory?
| `@task(task_config=ray_config, limits=Resources(mem="2000Mi", cpu="1", ephemeral_storage="500Mi"))`
|
If you’re using demo cluster, I think 1Gi is the limit.
| ```from sklearn.datasets import load_breast_cancer
from flytekit import Resources, task, workflow
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig
import ray
from ray import tune

# ray.shutdown()
# ray.init()
# ray.init("auto", ignore_reinit_error=True)

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)

num_actors = 2
num_cpus_per_actor = 1

ray_params = RayParams(
    num_actors=num_actors, cpus_per_actor=num_cpus_per_actor)


def train_model(config):
    train_x, train_y = load_breast_cancer(return_X_y=True)
    train_set = RayDMatrix(train_x, train_y)
    evals_result = {}
    bst = train(
        params=config,
        dtrain=train_set,
        evals_result=evals_result,
        evals=[(train_set, "train")],
        verbose_eval=False,
        ray_params=ray_params)
    bst.save_model("model.xgb")


# @task(limits=Resources(mem="2000Mi", cpu="1"))
@task(task_config=ray_config, limits=Resources(mem="3000Mi", cpu="1", ephemeral_storage="3000Mi"))
def train_model_task() -> dict:
    config = {
        "tree_method": "approx",
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "eta": tune.loguniform(1e-4, 1e-1),
        "subsample": tune.uniform(0.5, 1.0),
        "max_depth": tune.randint(1, 9)
    }
    analysis = tune.run(
        train_model,
        config=config,
        metric="train-error",
        mode="min",
        num_samples=4,
        max_concurrent_trials=1,
        resources_per_trial=ray_params.get_tune_resources())
    return analysis.best_config


@workflow
def train_model_wf() -> dict:
    return train_model_task()```
Still getting this error even when we specify the `ephemeral_storage` value. Do you have any suggested limits for cpu and memory?
|
i am trying it on EKS cluster
| If you’re using demo cluster, I think 1Gi is the limit.
|
<https://github.com/flyteorg/flyte/blob/aae01aa33eadfb86f1c952eb415f21326ea5519b/charts/flyte-core/values-eks.yaml#L216> section specifies the task resource defaults.
Can you check yours? Please try increasing the mem. I believe `kubectl -n flyte edit cm flyte-admin-base-config` is the command, but I'm not very sure. Let me know if this doesn't work.
| i am trying it on EKS cluster
|
<https://github.com/flyteorg/flyte/blob/aae01aa33eadfb86f1c952eb415f21326ea5519b/charts/flyte-core/values-eks.yaml#L216> section specifies the task resource defaults.
Can you check yours? Please try increasing the mem. I believe `kubectl -n flyte edit cm flyte-admin-base-config` is the command, but I'm not very sure. Let me know if this doesn't work.
|
|
Nice. Please increase your mem and try again.
| |
I increased the memory in task. the execution is getting queued but it is in pending state for long time. Even in remote run, the workflow is running for more than 4h for 4 trials but the execution is not happening.
| Nice. Please increase your mem and try again.
|
Have you seen the message saying you asked for 3 cpu and 0 gpu but the cluster has 2 cpu and 0 gpu?
| I increased the memory in task. the execution is getting queued but it is in pending state for long time. Even in remote run, the workflow is running for more than 4h for 4 trials but the execution is not happening.
|
yes but i have requested for only 1 cpu. should i change anywhere else?
```@task(task_config=ray_config, limits=Resources(mem="5000Mi", cpu="1", ephemeral_storage="3000Mi"))```
| Have you seen the message saying you asked for 3 cpu and 0 gpu but the cluster has 2 cpu and 0 gpu?
|
I think it’s because of `get_tune_resources()`.
Have you seen <https://docs.ray.io/en/releases-1.11.0/ray-more-libs/xgboost-ray.html#memory-usage> section in the doc?
I’m assuming you’re training an xgboost model.
| yes but i have requested for only 1 cpu. should i change anywhere else?
```@task(task_config=ray_config, limits=Resources(mem="5000Mi", cpu="1", ephemeral_storage="3000Mi"))```
|
```ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)```
do we have any way to specify the number of cpus in the ray cluster config? like this?
```ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
    num_cpus=4,
)```
because, as mentioned above, we have 64 cpus in the EKS cluster, but it shows this warning that we have only 2 cpus in the *ray cluster*. How do we increase the cpu limit in the ray cluster config?
| I think it’s because of `get_tune_resources()`.
Have you seen <https://docs.ray.io/en/releases-1.11.0/ray-more-libs/xgboost-ray.html#memory-usage> section in the doc?
I’m assuming you’re training an xgboost model.
|
I believe you can set them in `RayParams`.
<https://github.com/ray-project/xgboost_ray/blob/ecca2c63385841a0a1938f5edc349893e5ac63fc/xgboost_ray/main.py>
| ```ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
)```
do we have any way to specify the number of cpus in the ray cluster config? like this?
```ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(ray_start_params={"log-color": "True"}),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=3)],
    num_cpus=4,
)```
because, as mentioned above, we have 64 cpus in the EKS cluster, but it shows this warning that we have only 2 cpus in the *ray cluster*. How do we increase the cpu limit in the ray cluster config?
|
yeah, but in RayParams we can specify the number of cpus to be utilized for each trial via `cpus_per_actor`. Is there any config to be changed to increase the cpu of the ray cluster as a whole? Because when I increased cpus_per_actor, the requested cpu is still 2 and it shows the warning that it has only 2 cpus in the cluster.
| I believe you can set them in `RayParams`.
<https://github.com/ray-project/xgboost_ray/blob/ecca2c63385841a0a1938f5edc349893e5ac63fc/xgboost_ray/main.py>
|
Kevin Su, any idea how we can set the ray cluster resources? As per the docs, it should be possible with `init()`, but in this case, since Flyte initializes the cluster, how can a user modify those values?
| yeah, but in RayParams we can specify the number of cpus to be utilized for each trial via `cpus_per_actor`. Is there any config to be changed to increase the cpu of the ray cluster as a whole? Because when I increased cpus_per_actor, the requested cpu is still 2 and it shows the warning that it has only 2 cpus in the cluster.
|
To set Ray cluster resource, just update the `limit` and `request` in the @task. Like <https://github.com/flyteorg/flytesnacks/blob/a3b97943563cfc952b5683525763578685a93694/cookbook/integrations/kubernetes/ray_example/ray_example.py#L56|https://github.com/flyteorg/flytesnacks/blob/a3b97943563cfc952b5683525763578685a93[…]694/cookbook/integrations/kubernetes/ray_example/ray_example.py>
| Kevin Su, any idea how we can set the ray cluster resources? As per the docs, it should be possible with `init()`, but in this case, since Flyte initializes the cluster, how can a user modify those values?
|
```@task(task_config=ray_config, requests=Resources(mem="5000Mi", cpu="5", ephemeral_storage="1000Mi"), limits=Resources(mem="7000Mi", cpu="9", ephemeral_storage="2000Mi"))```
I have requested 5 cpus, but when it executes it shows only 2 requested cpus,
and shows the same warning that we have only 2 cpus in the cluster.
| To set Ray cluster resource, just update the `limit` and `request` in the @task. Like <https://github.com/flyteorg/flytesnacks/blob/a3b97943563cfc952b5683525763578685a93694/cookbook/integrations/kubernetes/ray_example/ray_example.py#L56|https://github.com/flyteorg/flytesnacks/blob/a3b97943563cfc952b5683525763578685a93[…]694/cookbook/integrations/kubernetes/ray_example/ray_example.py>
|
I’m wondering where it’s picking “you asked for 9.0 cpu” from. Is it from your `limits`?
| ```@task(task_config=ray_config, requests=Resources(mem="5000Mi", cpu="5", ephemeral_storage="1000Mi"), limits=Resources(mem="7000Mi", cpu="9", ephemeral_storage="2000Mi"))```
I have requested 5 cpus, but when it executes it shows only 2 requested cpus,
and shows the same warning that we have only 2 cpus in the cluster.
|
I think it is based on the resource requested per trial. when i specified cpus_per_trial and num_actors as 2 and 4 it showed requested cpus as 9. when i decreased the resource requested and num actors as 2 and 1 it showed 3.
When the cpus_per_trial and num_actors are 1, the actual requested cpu is 2 and the execution is happening fine since we have sufficient 2 cpus in the cluster. when the num_actors are increased it requests for more cpus so the execution is not happening.
| I’m wondering where it’s picking “you asked for 9.0 cpu” from. Is it from your `limits`?
|
Um got it. We need to find a way to increase the cluster resources. Not sure why `requests` isn’t assigning the requested resources to the cluster.
| I think it is based on the resource requested per trial. when i specified cpus_per_trial and num_actors as 2 and 4 it showed requested cpus as 9. when i decreased the resource requested and num actors as 2 and 1 it showed 3.
When the cpus_per_trial and num_actors are 1, the actual requested cpu is 2 and the execution is happening fine since we have sufficient 2 cpus in the cluster. when the num_actors are increased it requests for more cpus so the execution is not happening.
|
yeah. kindly notify if there is any way to do so.
| Um got it. We need to find a way to increase the cluster resources. Not sure why `requests` isn’t assigning the requested resources to the cluster.
|
Kevin Su, do you have any ideas?
| yeah. kindly notify if there is any way to do so.
|
Priya Could you describe the RayJob (kubectl describe) and check if the resources are the same as you specified in the @task? I guess the head node doesn’t use all the cpu in the pod. In other words, the cpu of the head pod could be 10, but the cpu of the head node process in the pod could be 2.
| Kevin Su, do you have any ideas?
|
I have attached the allocated memory when we describe the node.
```@task(task_config=ray_config, requests=Resources(mem="5000Mi", cpu="5") , limits=Resources(mem="7000Mi", cpu="9"))```
This is the requested resources.
| Priya Could you describe the RayJob (kubectl describe) and check if the resources are the same as you specified in the @task? I guess the head node doesn’t use all the cpu in the pod. In other words, the cpu of the head pod could be 10, but the cpu of the head node process in the pod could be 2.
|
sorry, could you describe the rayJob you are running?
| I have attached the allocated memory when we describe the node.
```@task(task_config=ray_config, requests=Resources(mem="5000Mi", cpu="5") , limits=Resources(mem="7000Mi", cpu="9"))```
This is the requested resources.
|
is there any command for this?
This is what is shown when we describe the kuberay-operator while it is running.
| sorry, could you describe the rayJob you are running?
|
cc: Kevin Su
| Hi, while initiating the ray cluster, the task is running in only one instance and pod. Generally, if a ray cluster is initiated, it is expected to run on different instances in a distributed manner, right? Can we do horizontal scaling here to increase the pool of resources?
|
hmm, if the ray task is started, propeller should create the head node and worker nodes. did you enable the ray plugin in propeller?
```tasks:
  task-plugins:
    enabled-plugins:
      - container
      - sidecar
      - k8s-array
      - ray
    default-for-task-types:
      container: container
      sidecar: sidecar
      container_array: k8s-array
      ray: ray```
| cc: Kevin Su
|
Yeah ray plugin is enabled
| hmm, if the ray task is started, propeller should create the head node and worker nodes. did you enable the ray plugin in propeller?
```tasks:
  task-plugins:
    enabled-plugins:
      - container
      - sidecar
      - k8s-array
      - ray
    default-for-task-types:
      container: container
      sidecar: sidecar
      container_array: k8s-array
      ray: ray```
|
is there any error in the kuberay operator?
| Yeah ray plugin is enabled
|
not sure. how to check if it works fine?
| is there any error in the kuberay operator?
|
kubectl logs <kuberay-operator> -n ray-system
| not sure. how to check if it works fine?
|
kubectl logs <kuberay-operator> -n ray-system
|
|
have you installed an ingress controller? if not, it will cause an error in kuberay; kuberay uses the ingress controller to create a new ingress route for the RayJob
| |
yes ingress controller is installed in the setup
| have you installed an ingress controller? if not, it will cause an error in kuberay; kuberay uses the ingress controller to create a new ingress route for the RayJob
|
Priya do you have couple mins to hop on a call?
| yes ingress controller is installed in the setup
|
sure ... pls let me know ur feasible timings
| Priya do you have couple mins to hop on a call?
|
maybe 9~12 AM in your time
| sure ... pls let me know ur feasible timings
|
Sorry for the inconvenience Kevin Su. We were having live demo so couldn't work on the setup. Will tomorrow same time work for u ?
| maybe 9~12 AM in your time
|
No worries, yes, ping me tomorrow when you are available
| Sorry for the inconvenience Kevin Su. We were having live demo so couldn't work on the setup. Will tomorrow same time work for u ?
|
Hi, actually once the helm chart is upgraded I am able to see the worker pods getting created. But the issue now is that the task is getting queued for a long time and is not getting initiated. It gets `The node was low on resource: ephemeral-storage` and it tries to initiate a new pod, but we have enough ephemeral storage on the instance.
The docker image that we are trying to pull is nearly 10gb. Will that be an issue? Shall we connect tomorrow morning at 9 AM my time? Can you confirm where to connect, through Slack or Google Meet?
| No worries, yes, ping me tomorrow when you are available
|
yes. update the propeller config map. change ttl to 0
<https://docs.flyte.org/en/latest/deployment/cluster_config/flytepropeller_config.html#ray-ray-config>
| Hi! Is there a way to shorten `ttlSecondsAfterFinished`? By default, it is 3600s (1 hour) and we’d like to tear down a cluster right after a job is complete. Thanks for your help!
```$ k describe rayjobs feb5da8c2a2394fb4ac8-n0-0 -n flytesnacks-development
...
Ttl Seconds After Finished: 3600```
|
Thanks for your prompt reply! Let me try this!
It worked like a charm!
```$ kubectl describe rayjobs f3281d8b2689c4c35a67-n0-0 -n flytesnacks-development
Ttl Seconds After Finished: 60```
For those who want to do the same, add this `ray.ttlSecondsAfterFinished` to the values.yaml for flyte-core.
``` # -- Kubernetes specific Flyte configuration
 k8s:
   plugins:
     ray:
       ttlSecondsAfterFinished: 60```
| yes. update the propeller config map. change ttl to 0
<https://docs.flyte.org/en/latest/deployment/cluster_config/flytepropeller_config.html#ray-ray-config>
|
nice!
| Thanks for your prompt reply! Let me try this!
It worked like a charm!
```$ kubectl describe rayjobs f3281d8b2689c4c35a67-n0-0 -n flytesnacks-development
Ttl Seconds After Finished: 60```
For those who want to do the same, add this `ray.ttlSecondsAfterFinished` to the values.yaml for flyte-core.
``` # -- Kubernetes specific Flyte configuration
 k8s:
   plugins:
     ray:
       ttlSecondsAfterFinished: 60```
|
Priya and Kevin Su please file the issue with Ray.
seems like KubeRay is buggy
| FYI: Priya and I found some issues when running the task on Kuberay 0.4.0. if you get any error as well, please downgrade to the 0.3.0 first. I’ll take a look into it at the end of this month.
one of the issues is that ray job status is always “queued”
and some other issues can be found in this <https://flyte-org.slack.com/archives/C049Q7GDWN9/p1670512286551649|thread>
|
Kevin Su, could it be a backend error?
| Hi, in KubeRay version 0.3.0, while trying to perform ray training remotely using `pyflyte --config ~/.flyte/config-remote.yaml run --remote --image <image_name> ray_demo.py wf`, I am getting this issue in the logs and the task is getting queued in the console. When the same is executed locally using `pyflyte --config ~/.flyte/config-remote.yaml run --image <image_name> ray_demo.py wf`, it works fine.
|
There's a similar thread on <#CP2HDHKE1|ask-the-community>: <https://flyte-org.slack.com/archives/CP2HDHKE1/p1672750846054829>.
Could you revive that thread? I'll ping my team to respond.
| Hi, I have a doubt regarding scaling of nodes. Do we have options to make each worker pod run on a different node, so that 'n' nodes with smaller-memory instances get spawned?
For example,
if I request 8G memory and 4 CPU with 4 replicas, a single higher-memory instance is spawned and all worker pods are accommodated on that one node. Instead, I need an approach where the worker pods are scheduled on 4 different nodes with smaller instances.
Do we have any way to achieve this scaling?
|
cc Daniel Rammer do you know if we can specify the pod spec for ray jobs?
is this part of the work you are doing?
| Hello! I am running Ray on Flyte. I am getting a warning about Ray using /tmp instead of /dev/shm because /dev/shm has only 67108864 bytes available.
To fix this, I _would_ just specify --shm-size=3.55gb when running the container. But Flyte is running the containers for us, so I cannot figure out how to specify any run options.
Is there a way to specify run options for the containers that Flyte runs?
Full text of warning attached.
|
Ketan Umare yes!
| cc Daniel Rammer do you know if we can specify the pod spec for ray jobs?
is this part of the work you are doing?
|
awesome Ruksana Kabealo
| Ketan Umare yes!
|
Hey Ruksana Kabealo, I'm looking into this a bit and not finding a definitive answer. So Flyte uses the <https://github.com/ray-project/kuberay|kuberay project> to launch Ray tasks, basically it <https://github.com/flyteorg/flyteplugins/blob/4634a81403e501882e3b1f39bcbd229b78768a4e/go/tasks/plugins/k8s/ray/ray.go#L155-L161|creates an instance of the RayJob> which is then executed. So basically, we need to figure out what configuration we need to change on the RayJob CR to support this. I have found <https://github.com/ray-project/kuberay/issues/201|this issue> which seems to indicate that these are simple memory requests / limits. Does this sound correct?
| awesome Ruksana Kabealo
|
Hey Daniel Rammer ! Yes, it should be a simple memory limit change to expand the size of /dev/shm
| Hey Ruksana Kabealo, I'm looking into this a bit and not finding a definitive answer. So Flyte uses the <https://github.com/ray-project/kuberay|kuberay project> to launch Ray tasks, basically it <https://github.com/flyteorg/flyteplugins/blob/4634a81403e501882e3b1f39bcbd229b78768a4e/go/tasks/plugins/k8s/ray/ray.go#L155-L161|creates an instance of the RayJob> which is then executed. So basically, we need to figure out what configuration we need to change on the RayJob CR to support this. I have found <https://github.com/ray-project/kuberay/issues/201|this issue> which seems to indicate that these are simple memory requests / limits. Does this sound correct?
|
Have you tried using the <https://github.com/flyteorg/flytesnacks/blob/master/cookbook/deployment/customizing_resources.py|task resource requests / limits>? IIUC Flyte's Ray plugin <https://github.com/flyteorg/flyteplugins/blob/4634a81403e501882e3b1f39bcbd229b78768a4e/go/tasks/plugins/k8s/ray/ray.go#L72-L79|uses those to set the container-level requests>.
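As a rough sketch of that suggestion (the task name and sizes below are placeholders, and whether /dev/shm actually grows with the container memory limit depends on the KubeRay pod configuration):

```python
from flytekit import Resources, task
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(),
    worker_node_config=[WorkerNodeConfig(group_name="ray-group", replicas=2)],
)


# Bump the task-level requests/limits; per the plugin code linked above, these are
# what the Ray plugin reportedly applies to the head/worker containers.
@task(
    task_config=ray_config,
    requests=Resources(mem="8Gi", cpu="2"),
    limits=Resources(mem="8Gi", cpu="2"),
)
def shm_hungry_task() -> int:
    # ... Ray / Modin work that needs a larger object store would go here ...
    return 0
```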
| Hey Daniel Rammer ! Yes, it should be a simple memory limit change to expand the size of /dev/shm
|
Hey Kevin Su Ketan Umare we are trying to make Flyte work for our internal Flyte cluster setup. Abdullah Mobeen opened this PR to enable the inter-cluster communication feature for Ray plugin. Could you guys help take a look? Thank you so much!
| Hi, we recently opened a <https://github.com/flyteorg/flyteplugins/pull/321|pull request> to address the following <https://github.com/flyteorg/flyte/issues/2883|issue> (inter-cluster communication between Flyte and custom Ray cluster). Can someone please review it? It adds to a product Spotify is building that is integral to our machine learning platform.
cc Keshi Dai
|
Thanks, reviewing
| Hey Kevin Su Ketan Umare we are trying to make Flyte work for our internal Flyte cluster setup. Abdullah Mobeen opened this PR to enable the inter-cluster communication feature for Ray plugin. Could you guys help take a look? Thank you so much!
|
:+1:
cc Dylan Wilder
| Thanks, reviewing
|
Technically, the changes are similar to what Spotify did for the <https://github.com/spotify/flyte-flink-plugin|Flyte-Flink plugin>. Our data infra team also added context to the issue I linked. Thanks!
| :+1:
cc Dylan Wilder
|
cc Daniel Rammer to review as well
| Technically, the changes are similar to what Spotify did for the <https://github.com/spotify/flyte-flink-plugin|Flyte-Flink plugin>. Our data infra team also added context to the issue I linked. Thanks!
|
timely :smile:
| cc Daniel Rammer to review as well
|
Looks great, merged, thanks Abdullah Mobeen! Now we would just need to update the flyteplugin dependency version in flytepropeller. Is this something you're looking for an immediate propeller release on or are you building your own image anyways?
| timely :smile:
|
Thanks a lot Daniel Rammer! Yess -- Since we prefer to always stay on a stable Flyte release, it is better if we make a new release to cover this plugin
| Looks great, merged, thanks Abdullah Mobeen! Now we would just need to update the flyteplugin dependency version in flytepropeller. Is this something you're looking for an immediate propeller release on or are you building your own image anyways?
|
Stable Flyte release will be 1.4
| Thanks a lot Daniel Rammer! Yess -- Since we prefer to always stay on a stable Flyte release, it is better if we make a new release to cover this plugin
|
Ketan Umare, what’s the rough timeline for Flyte 1.4 release?
| Stable Flyte release will be 1.4
|
So 1.4 is the current stable, 1.5 will be end of month (maybe first week of April) since we switched to a monthly release cycle. I opened <https://github.com/flyteorg/flytepropeller/pull/542|this PR> to get the plugin updates merged into propeller and will make sure this is merged for the 1.5 release.
| Ketan Umare, what’s the rough timeline for Flyte 1.4 release?
|
Peter Klingelhofer, you cannot register workflows present in ipynb files. You can, however, use FlyteRemote to register the tasks and workflows.
<https://docs.flyte.org/projects/flytekit/en/latest/design/control_plane.html#registering-entities>
You can include this code in a separate cell in your jupyter notebook and run it.
| Hi all, I created an issue <https://github.com/flyteorg/flyte/issues/3588|here> before realizing there was a Slack. Any ideas as to why the Python Ray example (from <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/ray_example/ray_example.html|the docs>) registers its workflow just fine, but the Jupyter Notebook example doesn't find any entities? I'm probably missing something obvious so apologies if that's the case.
I noticed that VS Code thinks there is a `\n` after `@workflow`(unsurprising as Jupyter Notebooks are typically run in the browser obviously), not sure if that could be causing the problem.
|
Thank you for the quick response.
I think I'm having trouble figuring out what my flyte_entity should be. Let's assume my project name is `repo`, and I have the `ray_example.ipynb` file in the `workflows` folder, and I'm trying to add the workflow to the `development` domain.
I was adding a new separate cell to the bottom of the Jupyter Notebook `ray_example.ipynb` file like so:
```from flytekit.remote import FlyteRemote
from flytekit.configuration import Config, SerializationSettings, ImageConfig

# Using image pushed to local registry at localhost:30000
img = ImageConfig.from_images(
    "localhost:30000/repo:latest", {"repo": "localhost:30000/repo:latest"}
)

# FlyteRemote object is the main entrypoint to API
remote = FlyteRemote(
    config=Config.for_sandbox(),
    default_project="repo",
    default_domain="development",
)

# Get Task
# flyte_task = remote.fetch_task(name="workflows.ray_example", version="v1")
flyte_task = remote.fetch_task(
    name="workflows.ray_example",
    version="v1",
    project="repo",
    domain="development",
)
flyte_task = remote.register_task(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v2",
)
flyte_workflow = remote.register_workflow(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v1",
)
flyte_launch_plan = remote.register_launch_plan(entity=flyte_task, version="v1")```
Yet I still receive the `FlyteEntityNotExistException`. Apologies if the answer is obvious. Thank you again so much for any help/assistance you can provide!
| Peter Klingelhofer, you cannot register workflows present in ipynb files. You can, however, use FlyteRemote to register the tasks and workflows.
<https://docs.flyte.org/projects/flytekit/en/latest/design/control_plane.html#registering-entities>
You can include this code in a separate cell in your jupyter notebook and run it.
|
I'm assuming you've not registered the flyte task yet. In that case, you needn't fetch the task. Directly register it. Check out <https://docs.flyte.org/projects/cookbook/en/latest/auto/case_studies/feature_engineering/feast_integration/Feast_Flyte_Demo.html> example.
| Thank you for the quick response.
I think I'm having trouble figuring out what my flyte_entity should be. Let's assume my project name is `repo`, and I have the `ray_example.ipynb` file in the `workflows` folder, and I'm trying to add the workflow to the `development` domain.
I was adding a new separate cell to the bottom of the Jupyter Notebook `ray_example.ipynb` file like so:
```from flytekit.remote import FlyteRemote
from flytekit.configuration import Config, SerializationSettings, ImageConfig

# Using image pushed to local registry at localhost:30000
img = ImageConfig.from_images(
    "localhost:30000/repo:latest", {"repo": "localhost:30000/repo:latest"}
)

# FlyteRemote object is the main entrypoint to API
remote = FlyteRemote(
    config=Config.for_sandbox(),
    default_project="repo",
    default_domain="development",
)

# Get Task
# flyte_task = remote.fetch_task(name="workflows.ray_example", version="v1")
flyte_task = remote.fetch_task(
    name="workflows.ray_example",
    version="v1",
    project="repo",
    domain="development",
)
flyte_task = remote.register_task(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v2",
)
flyte_workflow = remote.register_workflow(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v1",
)
flyte_launch_plan = remote.register_launch_plan(entity=flyte_task, version="v1")```
Yet I still receive the `FlyteEntityNotExistException`. Apologies if the answer is obvious. Thank you again so much for any help/assistance you can provide!
|
Thank you for your response Samhita Alla. I believe in my code snippet above, that's what I've done in this section:
```flyte_task = remote.register_task(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v2",
)```
Interestingly, this user suggests that registering workflows inside Jupyter notebooks is not possible: <https://github.com/flyteorg/flyte/issues/3588#issuecomment-1509599891>
If it is indeed possible, I would be happy to work on an MR to add an example Jupyter Notebook file to the Ray example (<https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/ray_example/ray_example.html>), just need to figure out how to get an example workflow working via a Jupyter Notebook. I'm just pushing the Docker image to the local registry at `localhost:30000`, which is what I would think would be the simplest implementation possible to run a workflow.
I do notice that it looks like there is an example with Papermill, but obviously that's not Ray: <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/papermilltasks/simple.html#sphx-glr-auto-integrations-flytekit-plugins-papermilltasks-simple-py|https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/papermilltasks/simple.html#sphx-glr-auto-in[…]ltasks-simple-py>
| I'm assuming you've not registered the flyte task yet. In that case, you needn't fetch the task. Directly register it. Check out <https://docs.flyte.org/projects/cookbook/en/latest/auto/case_studies/feature_engineering/feast_integration/Feast_Flyte_Demo.html> example.
|
Papermill is for running a jupyter notebook as a flyte task. In your case, I assume you're trying to register tasks and workflows that are present within your jupyter notebook, which is absolutely possible. What Kevin Su is saying is that you cannot register code present in your Jupyter notebook with `pyflyte run` or `pyflyte register`. You need to use FlyteRemote to register your code. Can you try registering by following the example I've sent earlier?
| Thank you for your response Samhita Alla. I believe in my code snippet above, that's what I've done in this section:
```flyte_task = remote.register_task(
    entity=flyte_task,
    serialization_settings=SerializationSettings(image_config=None),
    version="v2",
)```
Interestingly, this user suggests that registering workflows inside Jupyter notebooks is not possible: <https://github.com/flyteorg/flyte/issues/3588#issuecomment-1509599891>
If it is indeed possible, I would be happy to work on an MR to add an example Jupyter Notebook file to the Ray example (<https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/ray_example/ray_example.html>), just need to figure out how to get an example workflow working via a Jupyter Notebook. I'm just pushing the Docker image to the local registry at `localhost:30000`, which is what I would think would be the simplest implementation possible to run a workflow.
I do notice that it looks like there is an example with Papermill, but obviously that's not Ray: <https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/papermilltasks/simple.html#sphx-glr-auto-integrations-flytekit-plugins-papermilltasks-simple-py|https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/flytekit_plugins/papermilltasks/simple.html#sphx-glr-auto-in[…]ltasks-simple-py>
|
Thanks so much for the help Samhita Alla. I closed my GitHub issue as I did get the workflow to successfully register by importing the Jupyter Notebook via Papermill.
However, I'm still curious about FlyteRemote. I set up the FlyteRemote syntax, but I see that you said here that you can't use `pyflyte run` or `pyflyte register`, and I don't really see in the FlyteRemote documentation what the equivalent commands to register workflows via FlyteRemote would be. If I'm using FlyteRemote, what command would need to be run to register workflows, since we can't use `pyflyte register`? Apologies again for the confusion on my part.
| Papermill is for running a jupyter notebook as a flyte task. In your case, I assume you're trying to register tasks and workflows that are present within your jupyter notebook, which is absolutely possible. What Kevin Su is saying is that you cannot register code present in your Jupyter notebook with `pyflyte run` or `pyflyte register`. You need to use FlyteRemote to register your code. Can you try registering by following the example I've sent earlier?
|
No problem! You'd have to use the `register_task` / `register_workflow` / `register_launch_plan` / `register_script` functions. FlyteRemote is a Python API. You can use it to programmatically register your code.
<https://github.com/flyteorg/flytekit/blob/e865db57d3bfbb7fb997417b052a05bc871cb0ed/flytekit/remote/remote.py>
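To make that concrete, here is a rough notebook-cell sketch using `register_workflow` (the project, domain, image tag, and toy workflow are placeholders echoing this thread; depending on your flytekit version you may need to register the underlying tasks first or use `register_script` instead):

```python
from flytekit import task, workflow
from flytekit.configuration import Config, ImageConfig, SerializationSettings
from flytekit.remote import FlyteRemote


@task
def hello(name: str) -> str:
    return f"hello {name}"


@workflow
def wf(name: str = "ray") -> str:
    return hello(name=name)


remote = FlyteRemote(
    config=Config.for_sandbox(),
    default_project="repo",
    default_domain="development",
)

# Register the Python workflow object itself rather than a fetched copy.
registered_wf = remote.register_workflow(
    entity=wf,
    serialization_settings=SerializationSettings(
        image_config=ImageConfig.from_images("localhost:30000/repo:latest"),
    ),
    version="v1",
)

# Optionally launch it straight from the notebook.
execution = remote.execute(registered_wf, inputs={"name": "flyte"}, wait=True)
```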
| Thanks so much for the help Samhita Alla. I closed my GitHub issue as I did get the workflow to successfully register by importing the Jupyter Notebook via Papermill.
However, I'm still curious about FlyteRemote. I set up the FlyteRemote syntax, but I see that you said here that you can't use `pyflyte run` or `pyflyte register`, and I don't really see in the FlyteRemote documentation what the equivalent commands to register workflows via FlyteRemote would be. If I'm using FlyteRemote, what command would need to be run to register workflows, since we can't use `pyflyte register`? Apologies again for the confusion on my part.
|
This is fantastic
Yes, we have started but haven't gotten very far. The work is not a lot, but it has a couple of parts
| Hello Kevin Su Ketan Umare, This is Keshi from Spotify and we are evaluating Ray internally and would like to integrate it with Flyte. I heard from Jiaxin Shan that you guys have already started working on Flyte plugin for Ray and I’m interested in learning more on this.
|
Awesome! I’m happy to help and contribute. Do you think if we can collaborate on this?
At least from Spotify side, we would like to make sure the design would work for our setup. Maybe it’s worth having a sync on this?
| This is fantastic
Yes, we have started but haven't gotten very far. The work is not a lot, but it has a couple of parts
|
I think so
When would be a good time
| Awesome! I’m happy to help and contribute. Do you think if we can collaborate on this?
At least from Spotify side, we would like to make sure the design would work for our setup. Maybe it’s worth having a sync on this?
|
That’s awesome! Is your team on the west coast? A few slots (all based on NYC time) that will work for me:
• Monday May 23 after 3:30PM
• Tuesday May 24 after 2PM
• Wednesday May 25 2-3:30PM or after 4PM
Let me know which time works for you guys!
| I think so
When would be a good time
|
Keshi Dai Could I have your email, I’ll send you a meeting link
| That’s awesome! Is your team on the west coast? A few slots (all based on NYC time) that will work for me:
• Monday May 23 after 3:30PM
• Tuesday May 24 after 2PM
• Wednesday May 25 2-3:30PM or after 4PM
Let me know which time works for you guys!
|