---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1490
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: >-
How can I configure the orchestrator settings for each cloud provider in
ZenML?
sentences:
- >-
. If not set, the cluster will not be autostopped.

down: Tear down the cluster after all jobs finish (successfully or abnormally). If idle_minutes_to_autostop is also set, the cluster will be torn down after the specified idle time. Note that if errors occur during provisioning/data syncing/setting up, the cluster will not be torn down for debugging purposes.

stream_logs: If True, show the logs in the terminal as they are generated while the cluster is running.

docker_run_args: Additional arguments to pass to the docker run command. For example, ['--gpus=all'] to use all GPUs available on the VM.
The following code snippets show how to configure the orchestrator
settings for each cloud provider:
Code Example:

from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings

skypilot_settings = SkypilotAWSOrchestratorSettings(
    cpus="2",
    memory="16",
    accelerators="V100:2",
    accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"},
    use_spot=True,
    spot_recovery="recovery_strategy",
    region="us-west-1",
    zone="us-west1-a",
    image_id="ami-1234567890abcdef0",
    disk_size=100,
    disk_tier="high",
    cluster_name="my_cluster",
    retry_until_up=True,
    idle_minutes_to_autostop=60,
    down=True,
    stream_logs=True,
    docker_run_args=["--gpus=all"],
)

@pipeline(
    settings={
        "orchestrator.vm_aws": skypilot_settings
    }
)
Code Example:

from zenml.integrations.skypilot_gcp.flavors.skypilot_orchestrator_gcp_vm_flavor import SkypilotGCPOrchestratorSettings

skypilot_settings = SkypilotGCPOrchestratorSettings(
    cpus="2",
    memory="16",
    accelerators="V100:2",
    accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"},
    use_spot=True,
    spot_recovery="recovery_strategy",
    region="us-west1",
    zone="us-west1-a",
    image_id="ubuntu-pro-2004-focal-v20231101",
    disk_size=100,
    disk_tier="high",
    cluster_name="my_cluster",
    retry_until_up=True,
    idle_minutes_to_autostop=60,
    down=True,
    stream_logs=True,
)

@pipeline(
    settings={
        "orchestrator.vm_gcp": skypilot_settings
    }
)
- >-
The post-execution workflow has changed as follows: The get_pipelines and
get_pipeline methods have been moved out of the Repository (i.e. the new
Client) class and now live directly in the post_execution module. To use
them, the user has to do:

from zenml.post_execution import get_pipelines, get_pipeline

New methods to directly get runs have been introduced: get_run to fetch a
specific run, and get_unlisted_runs to fetch unlisted runs.
Usage remains largely similar. Please read the new docs for
post-execution to inform yourself of what else has changed.

How to migrate: Replace all post-execution workflows from the paradigm
of Repository.get_pipelines or Repository.get_pipeline_run to the
corresponding post_execution methods.
💡Future Changes
While this rehaul is big and will break previous releases, we do have
some more work left to do. However, we also expect this to be the last
big rehaul of ZenML before our 1.0.0 release, and no other release will
be as hard-breaking as this one. Currently planned future breaking
changes are:
Following the metadata store, the secrets manager stack component might
move out of the stack.
ZenML StepContext might be deprecated.
🐞 Reporting Bugs
While we have tried our best to document everything that has changed, we
realize that mistakes can be made and smaller changes overlooked. If
this is the case, or you encounter a bug at any time, the ZenML core
team and community are available around the clock on the growing Slack
community.
For bug reports, please also consider submitting a GitHub Issue.
Lastly, if the new changes have left you desiring a feature, then
consider adding it to our public feature voting board. Before doing so,
do check what is already on there and consider upvoting the features you
desire the most.
- >-
ZenML, namely an orchestrator and an artifact store. Keep in mind that
each one of these components is built on top of base abstractions and is
completely extensible.
Orchestrator
An Orchestrator is a workhorse that coordinates all the steps to run in
a pipeline. Since pipelines can be set up with complex combinations of
steps with various asynchronous dependencies between them, the
orchestrator acts as the component that decides what steps to run and
when to run them.
ZenML comes with a default local orchestrator designed to run on your
local machine. This is useful, especially during the exploration phase
of your project. You don't have to rent a cloud instance just to try out
basic things.
Artifact Store
An Artifact Store is a component that houses all data that passes through
the pipeline as inputs and outputs. Each artifact that gets stored in
the artifact store is tracked and versioned, and this allows for
extremely useful features like data caching, which speeds up your
workflows.
Similar to the orchestrator, ZenML comes with a default local artifact
store designed to run on your local machine. This is useful, especially
during the exploration phase of your project. You don't have to set up a
cloud storage system to try out basic things.
Flavor
ZenML provides a dedicated base abstraction for each stack component
type. These abstractions are used to develop solutions, called Flavors,
tailored to specific use cases/tools. With ZenML installed, you get
access to a variety of built-in and integrated Flavors for each
component type, but users can also leverage the base abstractions to
create their own custom flavors.
Stack Switching
When it comes to production-grade solutions, it is rarely enough to just
run your workflow locally without including any cloud infrastructure.
- source_sentence: How can I fetch artifacts from other pipelines within a step using ZenML?
sentences:
- >2-
EXPIRES IN         N/A
OWNER              default
WORKSPACE          default
SHARED             ➖
CREATED_AT         2023-05-19 09:15:12.882929
UPDATED_AT         2023-05-19 09:15:12.882930

Configuration

PROPERTY           VALUE
project_id         zenml-core
user_account_json  [HIDDEN]
Local client provisioning
The local gcloud CLI, the Kubernetes kubectl CLI and the Docker CLI can
be configured with credentials extracted from or generated by a
compatible GCP Service Connector. Please note that unlike the
configuration made possible through the GCP CLI, the Kubernetes and
Docker credentials issued by the GCP Service Connector have a short
lifetime and will need to be regularly refreshed. This is a byproduct of
implementing a high-security profile.
- >-
maxindex = np.argmax(prediction.numpy())
return classes[maxindex]

The custom predict function should get the model
and the input data as arguments and return the model predictions. ZenML
will automatically take care of loading the model into memory and
starting the seldon-core-microservice that will be responsible for
serving the model and running the predict function.

After defining your custom predict function in code, you can use the
seldon_custom_model_deployer_step to automatically build your function
into a Docker image and deploy it as a model server by setting the
predict_function argument to the path of your custom_predict function:

from zenml.integrations.seldon.steps import seldon_custom_model_deployer_step
from zenml.integrations.seldon.services import SeldonDeploymentConfig
from zenml import pipeline

@pipeline
def seldon_deployment_pipeline():
    model = ...
    seldon_custom_model_deployer_step(
        model=model,
        predict_function="<PATH.TO.custom_predict>",  # TODO: path to custom code
        service_config=SeldonDeploymentConfig(
            model_name="<MODEL_NAME>",
            replicas=1,
            implementation="custom",
            resources=SeldonResourceRequirements(
                limits={"cpu": "200m", "memory": "250Mi"}
            ),
            serviceAccountName="kubernetes-service-account",
        ),
    )
Advanced Custom Code Deployment with Seldon Core Integration
Before creating your custom model class, you should take a look at the
custom Python model section of the Seldon Core documentation.
The built-in Seldon Core custom deployment step is a good starting point
for deploying your custom models. However, if you want to deploy more
than the trained model, you can create your own custom class and a
custom step to achieve this.
See the ZenML custom Seldon model class as a reference.
- >-
Get arbitrary artifacts in a step
Not all artifacts need to come through the step interface from direct
upstream steps.
As described in the metadata guide, the metadata can be fetched with the
client, and this is how you would use it to fetch it within a step. This
allows you to fetch artifacts from other upstream steps or even
completely different pipelines.
from zenml.client import Client
from zenml import step

@step
def my_step():
    client = Client()
    output = client.get_artifact_version("my_dataset", "my_version")
    accuracy = output.run_metadata["accuracy"].value
This is one of the ways you can access artifacts that have already been
created and stored in the artifact store. This can be useful when you
want to use artifacts from other pipelines or steps that are not
directly upstream.
See Also
Managing artifacts - learn about the ExternalArtifact type and how to
pass artifacts between steps.
- source_sentence: Where can I find more information about using Feast in ZenML?
sentences:
- >-
hat's described on the feast page: How to use it?
- >-
other remote stack components also running in AWS. This method uses the
implicit AWS authentication available in the environment where the ZenML
code is running. On your local machine, this is the quickest way to
configure an S3 Artifact Store. You don't need to supply credentials
explicitly when you register the S3 Artifact Store, as it leverages the
local credentials and configuration that the AWS CLI stores on your
local machine. However, you will need to install and set up the AWS CLI
on your machine as a prerequisite, as covered in the AWS CLI
documentation, before you register the S3 Artifact Store.
Certain dashboard functionality, such as visualizing or deleting
artifacts, is not available when using an implicitly authenticated
artifact store together with a deployed ZenML server because the ZenML
server will not have permission to access the filesystem.
The implicit authentication method also needs to be coordinated with
other stack components that are highly dependent on the Artifact Store
and need to interact with it directly to work. If these components are
not running on your machine, they do not have access to the local AWS
CLI configuration and will encounter authentication failures while
trying to access the S3 Artifact Store:
Orchestrators need to access the Artifact Store to manage pipeline
artifacts
Step Operators need to access the Artifact Store to manage step-level
artifacts
Model Deployers need to access the Artifact Store to load served models
To enable these use-cases, it is recommended to use an AWS Service
Connector to link your S3 Artifact Store to the remote S3 bucket.
To set up the S3 Artifact Store to authenticate to AWS and access an S3
bucket, it is recommended to leverage the many features provided by the
AWS Service Connector such as auto-configuration, best security
practices regarding long-lived credentials and fine-grained access
control and reusing the same credentials across multiple stack
components.
- >2-
us know!

Configuration at pipeline or step level

When running your ZenML pipeline
with the Sagemaker orchestrator, the configuration set when configuring
the orchestrator as a ZenML component will be used by default. However,
it is possible to provide additional configuration at the pipeline or
step level. This allows you to run whole pipelines or individual steps
with alternative configurations. For example, this allows you to run the
training process with a heavier, GPU-enabled instance type, while
running other steps with lighter instances.
Additional configuration for the Sagemaker orchestrator can be passed
via SagemakerOrchestratorSettings. Here, it is possible to configure
processor_args, which is a dictionary of arguments for the Processor.
For available arguments, see the Sagemaker documentation. Currently, it
is not possible to provide custom configuration for the following
attributes:
image_uri
instance_count
sagemaker_session
entrypoint
base_job_name
env
For example, settings can be provided in the following way:
sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    processor_args={
        "instance_type": "ml.t3.medium",
        "volume_size_in_gb": 30
    }
)

They can then be applied to a step as follows:

@step(settings={"orchestrator.sagemaker": sagemaker_orchestrator_settings})
For example, if your ZenML component is configured to use ml.c5.xlarge
with 400GB additional storage by default, all steps will use it except
for the step above, which will use ml.t3.medium with 30GB additional
storage.
Check out this docs page for more information on how to specify settings
in general.
For more information and a full list of configurable attributes of the
Sagemaker orchestrator, check out the SDK Docs .
S3 data access in ZenML steps
- source_sentence: How is the AWS region specified in the configuration for ZenML?
sentences:
- >-
ge or if the ZenML version doesn't change at all).

a backup file or
database is created before every database migration attempt (i.e. during
every Helm upgrade). If a backup already exists (i.e. persisted in a
persistent volume or backup database), it is overwritten.
the persistent backup file or database is cleaned up after the migration
is completed successfully or if the database doesn't need to undergo a
migration. This includes backups created by previous failed migration
attempts.
the persistent backup file or database is NOT cleaned up after a failed
migration. This allows the user to manually inspect and/or apply the
backup if the automatic recovery fails.
The following example shows how to configure the ZenML server to use a
persistent volume to store the database dump file:
zenml:
  database:
    url: "mysql://admin:<password>@<database-host>:3306/zenml"
    backupStrategy: dump-file
    backupPVStorageSize: 1Gi

podSecurityContext:
  fsGroup: 1000
necessarily be set.
- >-
Control logging
Configuring ZenML's default logging behavior
ZenML produces various kinds of logs:
The ZenML Server produces server logs (like any FastAPI server).
The Client or Runner environment produces logs, for example after
running a pipeline. These are logs that are typically generated before,
after, and during the creation of a pipeline run.
The Execution environment (on the orchestrator level) produces logs when
it executes each step of a pipeline. These are logs that are typically
written in your steps using the python logging module.
This section talks about how users can control logging behavior in these
various environments.
- >2-
SHARED                 ➖
CREATED_AT             2023-06-19 18:12:42.066053
UPDATED_AT             2023-06-19 18:12:42.066055

Configuration

PROPERTY               VALUE
region                 us-east-1
aws_access_key_id      [HIDDEN]
aws_secret_access_key  [HIDDEN]
AWS Secret Key
Long-lived AWS credentials consisting of an AWS access key ID and secret
access key associated with an AWS IAM user or AWS account root user (not
recommended).
This method is preferred during development and testing due to its
simplicity and ease of use. It is not recommended as a direct
authentication method for production use cases because the clients have
direct access to long-lived credentials and are granted the full set of
permissions of the IAM user or AWS account root user associated with the
credentials. For production, it is recommended to use the AWS IAM Role,
AWS Session Token, or AWS Federation Token authentication method
instead.
An AWS region is required and the connector may only be used to access
AWS resources in the specified region.
If you already have the local AWS CLI set up with these credentials,
they will be automatically picked up when auto-configuration is used
(see the example below).
- source_sentence: >-
Can you explain how the `query_similar_docs` function handles document
reranking?
sentences:
- >-
def query_similar_docs(
    question: str,
    url_ending: str,
    use_reranking: bool = False,
    returned_sample_size: int = 5,
) -> Tuple[str, str, List[str]]:
    """Query similar documents for a given question and URL ending."""
    embedded_question = get_embeddings(question)
    db_conn = get_db_conn()
    num_docs = 20 if use_reranking else returned_sample_size
    # get (content, url) tuples for the top n similar documents
    top_similar_docs = get_topn_similar_docs(
        embedded_question, db_conn, n=num_docs, include_metadata=True
    )
    if use_reranking:
        reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[
            :returned_sample_size
        ]
        urls = [doc[1] for doc in reranked_docs_and_urls]
    else:
        urls = [doc[1] for doc in top_similar_docs]
    return (question, url_ending, urls)
We get the embeddings for the question being passed into the function
and connect to our PostgreSQL database. If we're using reranking, we get
the top 20 documents similar to our query and rerank them using the
rerank_documents helper function. We then extract the URLs from the
reranked documents and return them. Note that we only return 5 URLs; in
the case of reranking we get a larger number of documents and URLs back
from the database to pass to our reranker, but in the end we always
choose the top five reranked documents to return.
Now that we've added reranking to our pipeline, we can evaluate the
performance of our reranker and see how it affects the quality of the
retrieved documents.
Code Example
To explore the full code, visit the Complete Guide repository and for
this section, particularly the eval_retrieval.py file.
- >-
uter vision that expect a single dataset as input.

model drift checks require two datasets and a mandatory model as input. This list includes
a subset of the model evaluation checks provided by Deepchecks for
tabular data and for computer vision that expect two datasets as input:
target and reference.
This structure is directly reflected in how Deepchecks can be used with
ZenML: there are four different Deepchecks standard steps and four
different ZenML enums for Deepchecks checks. The Deepchecks Data
Validator API is also modeled to reflect this same structure.
A notable characteristic of Deepchecks is that you don't need to
customize the set of Deepchecks tests that are part of a test suite.
Both ZenML and Deepchecks provide sane defaults that will run all
available Deepchecks tests in a given category with their default
conditions if a custom list of tests and conditions is not provided.
There are three ways you can use Deepchecks in your ZenML pipelines that
allow different levels of flexibility:
instantiate, configure and insert one or more of the standard Deepchecks
steps shipped with ZenML into your pipelines. This is the easiest way
and the recommended approach, but can only be customized through the
supported step configuration parameters.
call the data validation methods provided by the Deepchecks Data
Validator in your custom step implementation. This method allows for
more flexibility concerning what can happen in the pipeline step, but
you are still limited to the functionality implemented in the Data
Validator.
use the Deepchecks library directly in your custom step implementation.
This gives you complete freedom in how you are using Deepchecks'
features.
You can visualize Deepchecks results in Jupyter notebooks or view them
directly in the ZenML dashboard.
Warning! Usage in remote orchestrators
- >2-
use for the database connection.
database_ssl_ca:

# The path to the client SSL certificate to use for the database connection.
database_ssl_cert:

# The path to the client SSL key to use for the database connection.
database_ssl_key:

# Whether to verify the database server SSL certificate.
database_ssl_verify_server_cert:

Run the deploy command and pass the config file above to it.

zenml deploy --config=/PATH/TO/FILE

Note: To be able to run the deploy command, you should have your cloud
provider's CLI configured locally with permissions to create resources
like MySQL databases and networks.

Configuration file templates

Base configuration file

Below is the general structure of a config file. Use this as a base and
then add any cloud-specific parameters from the sections below.

# Name of the server deployment.
name:

# The server provider type, one of aws, gcp or azure.
provider:

# The path to the kubectl config file to use for deployment.
kubectl_config_path:

# The Kubernetes namespace to deploy the ZenML server to.
namespace: zenmlserver

# The path to the ZenML server helm chart to use for deployment.
helm_chart:

# The repository and tag to use for the ZenML server Docker image.
zenmlserver_image_repo: zenmldocker/zenml
zenmlserver_image_tag: latest

# Whether to deploy an nginx ingress controller as part of the deployment.
create_ingress_controller: true

# Whether to use TLS for the ingress.
ingress_tls: true

# Whether to generate self-signed TLS certificates for the ingress.
ingress_tls_generate_certs: true

# The name of the Kubernetes secret to use for the ingress.
ingress_tls_secret_name: zenml-tls-certs

# The ingress controller's IP address. The ZenML server will be exposed
# on a subdomain of this IP. For AWS, if you have a hostname instead, use
# the following command to get the IP address: `dig +short <hostname>`.
ingress_controller_ip:

# Whether to create a SQL database service as part of the recipe.
deploy_db: true
model-index:
- name: strickvl/finetuned-all-MiniLM-L6-v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 384
type: dim_384
metrics:
- type: cosine_accuracy@1
value: 0.30120481927710846
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5421686746987951
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6746987951807228
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7409638554216867
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.30120481927710846
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18072289156626503
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13493975903614455
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07409638554216866
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.30120481927710846
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5421686746987951
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6746987951807228
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7409638554216867
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5191955019858888
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.44787244214955063
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4579267717676669
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.29518072289156627
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5301204819277109
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6325301204819277
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7349397590361446
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.29518072289156627
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.17670682730923695
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12650602409638553
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07349397590361445
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.29518072289156627
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5301204819277109
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6325301204819277
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7349397590361446
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5118888198675068
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4409805890227577
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.45029464689656734
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.2710843373493976
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5120481927710844
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6144578313253012
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6987951807228916
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2710843373493976
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1706827309236948
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12289156626506023
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06987951807228915
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2710843373493976
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5120481927710844
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6144578313253012
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6987951807228916
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4883715088201252
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4208237712755786
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4307910346351659
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.25301204819277107
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4578313253012048
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5542168674698795
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6566265060240963
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.25301204819277107
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15261044176706828
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1108433734939759
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06566265060240963
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.25301204819277107
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4578313253012048
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5542168674698795
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6566265060240963
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4465853836525359
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.380495792694588
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.39060460620612997
name: Cosine Map@100
---

strickvl/finetuned-all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer

# Download the model from the 🤗 Hub
model = SentenceTransformer("strickvl/finetuned-all-MiniLM-L6-v2")
sentences = [
'Can you explain how the `query_similar_docs` function handles document reranking?',
'ry_similar_docs(\n\nquestion: str,\n\nurl_ending: str,use_reranking: bool = False,\n\nreturned_sample_size: int = 5,\n\n) -> Tuple[str, str, List[str]]:\n\n"""Query similar documents for a given question and URL ending."""\n\nembedded_question = get_embeddings(question)\n\ndb_conn = get_db_conn()\n\nnum_docs = 20 if use_reranking else returned_sample_size\n\n# get (content, url) tuples for the top n similar documents\n\ntop_similar_docs = get_topn_similar_docs(\n\nembedded_question, db_conn, n=num_docs, include_metadata=True\n\nif use_reranking:\n\nreranked_docs_and_urls = rerank_documents(question, top_similar_docs)[\n\n:returned_sample_size\n\nurls = [doc[1] for doc in reranked_docs_and_urls]\n\nelse:\n\nurls = [doc[1] for doc in top_similar_docs] # Unpacking URLs\n\nreturn (question, url_ending, urls)\n\nWe get the embeddings for the question being passed into the function and connect to our PostgreSQL database. If we\'re using reranking, we get the top 20 documents similar to our query and rerank them using the rerank_documents helper function. We then extract the URLs from the reranked documents and return them. Note that we only return 5 URLs, but in the case of reranking we get a larger number of documents and URLs back from the database to pass to our reranker, but in the end we always choose the top five reranked documents to return.\n\nNow that we\'ve added reranking to our pipeline, we can evaluate the performance of our reranker and see how it affects the quality of the retrieved documents.\n\nCode Example\n\nTo explore the full code, visit the Complete Guide repository and for this section, particularly the eval_retrieval.py file.\n\nPreviousUnderstanding reranking\n\nNextEvaluating reranking performance\n\nLast updated 15 days ago',
" use for the database connection.\ndatabase_ssl_ca:# The path to the client SSL certificate to use for the database connection.\ndatabase_ssl_cert:\n\n# The path to the client SSL key to use for the database connection.\ndatabase_ssl_key:\n\n# Whether to verify the database server SSL certificate.\ndatabase_ssl_verify_server_cert:\n\nRun the deploy command and pass the config file above to it.Copyzenml deploy --config=/PATH/TO/FILENote To be able to run the deploy command, you should have your cloud provider's CLI configured locally with permissions to create resources like MySQL databases and networks.\n\nConfiguration file templates\n\nBase configuration file\n\nBelow is the general structure of a config file. Use this as a base and then add any cloud-specific parameters from the sections below.\n\n# Name of the server deployment.\n\nname:\n\n# The server provider type, one of aws, gcp or azure.\n\nprovider:\n\n# The path to the kubectl config file to use for deployment.\n\nkubectl_config_path:\n\n# The Kubernetes namespace to deploy the ZenML server to.\n\nnamespace: zenmlserver\n\n# The path to the ZenML server helm chart to use for deployment.\n\nhelm_chart:\n\n# The repository and tag to use for the ZenML server Docker image.\n\nzenmlserver_image_repo: zenmldocker/zenml\n\nzenmlserver_image_tag: latest\n\n# Whether to deploy an nginx ingress controller as part of the deployment.\n\ncreate_ingress_controller: true\n\n# Whether to use TLS for the ingress.\n\ningress_tls: true\n\n# Whether to generate self-signed TLS certificates for the ingress.\n\ningress_tls_generate_certs: true\n\n# The name of the Kubernetes secret to use for the ingress.\n\ningress_tls_secret_name: zenml-tls-certs\n\n# The ingress controller's IP address. The ZenML server will be exposed on a subdomain of this IP. For AWS, if you have a hostname instead, use the following command to get the IP address: `dig +short <hostname>`.\n\ningress_controller_ip:\n\n# Whether to create a SQL database service as part of the recipe.\n\ndeploy_db: true\n\n# The username and password for the database.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
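Because the model was trained with a Matryoshka loss over dimensions 384, 256, 128, and 64, embeddings can also be truncated at load time. A minimal sketch, assuming a sentence-transformers version that supports the truncate_dim argument (2.7+):

from sentence_transformers import SentenceTransformer

# Load the model so it outputs 256-dimensional embeddings instead of the full 384.
# Smaller dimensions trade some retrieval quality for speed and storage.
model = SentenceTransformer("strickvl/finetuned-all-MiniLM-L6-v2", truncate_dim=256)

embeddings = model.encode(["How do I configure a ZenML orchestrator?"])
print(embeddings.shape)
# (1, 256)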
Evaluation
Metrics
Information Retrieval

- Dataset: dim_384

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.3012 |
| cosine_accuracy@3   | 0.5422 |
| cosine_accuracy@5   | 0.6747 |
| cosine_accuracy@10  | 0.741  |
| cosine_precision@1  | 0.3012 |
| cosine_precision@3  | 0.1807 |
| cosine_precision@5  | 0.1349 |
| cosine_precision@10 | 0.0741 |
| cosine_recall@1     | 0.3012 |
| cosine_recall@3     | 0.5422 |
| cosine_recall@5     | 0.6747 |
| cosine_recall@10    | 0.741  |
| cosine_ndcg@10      | 0.5192 |
| cosine_mrr@10       | 0.4479 |
| cosine_map@100      | 0.4579 |
Information Retrieval

- Dataset: dim_256

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.2952 |
| cosine_accuracy@3   | 0.5301 |
| cosine_accuracy@5   | 0.6325 |
| cosine_accuracy@10  | 0.7349 |
| cosine_precision@1  | 0.2952 |
| cosine_precision@3  | 0.1767 |
| cosine_precision@5  | 0.1265 |
| cosine_precision@10 | 0.0735 |
| cosine_recall@1     | 0.2952 |
| cosine_recall@3     | 0.5301 |
| cosine_recall@5     | 0.6325 |
| cosine_recall@10    | 0.7349 |
| cosine_ndcg@10      | 0.5119 |
| cosine_mrr@10       | 0.441  |
| cosine_map@100      | 0.4503 |
Information Retrieval

- Dataset: dim_128

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.2711 |
| cosine_accuracy@3   | 0.512  |
| cosine_accuracy@5   | 0.6145 |
| cosine_accuracy@10  | 0.6988 |
| cosine_precision@1  | 0.2711 |
| cosine_precision@3  | 0.1707 |
| cosine_precision@5  | 0.1229 |
| cosine_precision@10 | 0.0699 |
| cosine_recall@1     | 0.2711 |
| cosine_recall@3     | 0.512  |
| cosine_recall@5     | 0.6145 |
| cosine_recall@10    | 0.6988 |
| cosine_ndcg@10      | 0.4884 |
| cosine_mrr@10       | 0.4208 |
| cosine_map@100      | 0.4308 |
Information Retrieval

- Dataset: dim_64

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.253  |
| cosine_accuracy@3   | 0.4578 |
| cosine_accuracy@5   | 0.5542 |
| cosine_accuracy@10  | 0.6566 |
| cosine_precision@1  | 0.253  |
| cosine_precision@3  | 0.1526 |
| cosine_precision@5  | 0.1108 |
| cosine_precision@10 | 0.0657 |
| cosine_recall@1     | 0.253  |
| cosine_recall@3     | 0.4578 |
| cosine_recall@5     | 0.5542 |
| cosine_recall@10    | 0.6566 |
| cosine_ndcg@10      | 0.4466 |
| cosine_mrr@10       | 0.3805 |
| cosine_map@100      | 0.3906 |
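Each table reports the retrieval metrics at one Matryoshka dimension. A minimal sketch of how such metrics can be computed with sentence-transformers' InformationRetrievalEvaluator; the queries, corpus, and relevance mapping below are placeholders rather than the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("strickvl/finetuned-all-MiniLM-L6-v2", truncate_dim=384)

# Placeholder evaluation data: query IDs to text, document IDs to text,
# and query IDs to the set of relevant document IDs.
queries = {"q1": "How can I fetch artifacts from other pipelines within a step?"}
corpus = {"d1": "Not all artifacts need to come through the step interface ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_384")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100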
Training Details
Training Dataset
Unnamed Dataset
- Size: 1,490 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

|         | positive                                          | anchor                                               |
|:--------|:--------------------------------------------------|:-----------------------------------------------------|
| type    | string                                            | string                                               |
| details | min: 9 tokens, mean: 21.12 tokens, max: 49 tokens | min: 21 tokens, mean: 240.72 tokens, max: 256 tokens |
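A minimal sketch of assembling data in this two-column layout with the datasets library; the pair shown is illustrative, adapted from the widget examples above:

from datasets import Dataset

# A toy (positive, anchor) dataset: each row pairs a generated question
# with the ZenML documentation chunk it was generated from.
train_dataset = Dataset.from_dict({
    "positive": [
        "How can I fetch artifacts from other pipelines within a step using ZenML?"
    ],
    "anchor": [
        "Get arbitrary artifacts in a step: Not all artifacts need to come through the step interface ..."
    ],
})
print(train_dataset)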
- Samples:
positive: Can you provide the details for the Azure service principal with the ID 273d2812-2643-4446-82e6-6098b8ccdaa4?

anchor:

ID                273d2812-2643-4446-82e6-6098b8ccdaa4
NAME              azure-service-principal
TYPE              🇦 azure
AUTH METHOD       service-principal
RESOURCE TYPES    🇦 azure-generic, 📦 blob-container, 🌀 kubernetes-cluster, 🐳 docker-registry
RESOURCE NAME
SECRET ID         50d9f230-c4ea-400e-b2d7-6b52ba2a6f90
SESSION DURATION  N/A
EXPIRES IN        N/A
positive: What are the new features introduced in ZenML 0.20.0 regarding the Metadata Store?

anchor:

ed to update the way they are registered in ZenML. The updated ZenML server provides a new and improved collaborative experience. When connected to a ZenML server, you can now share your ZenML Stacks and Stack Components with other users. If you were previously using the ZenML Profiles or the ZenML server to share your ZenML Stacks, you should switch to the new ZenML server and Dashboard and update your existing workflows to reflect the new features.
ZenML takes over the Metadata Store role
ZenML can now run as a server that can be accessed via a REST API and also comes with a visual user interface (called the ZenML Dashboard). This server can be deployed in arbitrary environments (local, on-prem, via Docker, on AWS, GCP, Azure etc.) and supports user management, workspace scoping, and more.
The release introduces a series of commands to facilitate managing the lifecycle of the ZenML server and to access the pipeline and pipeline run information:
zenml connect / disconnect / down / up / logs / status can be used to configure your client to connect to a ZenML server, to start a local ZenML Dashboard or to deploy a ZenML server to a cloud environment. For more information on how to use these commands, see the ZenML deployment documentation.
zenml pipeline list / runs / delete can be used to display information about and manage your pipelines and pipeline runs.
In ZenML 0.13.2 and earlier versions, information about pipelines and pipeline runs used to be stored in a separate stack component called the Metadata Store. Starting with 0.20.0, the role of the Metadata Store is now taken over by ZenML itself. This means that the Metadata Store is no longer a separate component in the ZenML architecture, but rather a part of the ZenML core, located wherever ZenML is deployed: locally on your machine or running remotely as a server.
positive: Which environment variables should I set to use the Azure Service Connector authentication method in ZenML?

anchor:

-client-id","client_secret": "my-client-secret"}). Note: The remaining configuration options are deprecated and may be removed in a future release. Instead, you should set the ZENML_SECRETS_STORE_AUTH_METHOD and ZENML_SECRETS_STORE_AUTH_CONFIG variables to use the Azure Service Connector authentication method.
ZENML_SECRETS_STORE_AZURE_CLIENT_ID: The Azure application service principal client ID to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET: The Azure application service principal client secret to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
ZENML_SECRETS_STORE_AZURE_TENANT_ID: The Azure application service principal tenant ID to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
These configuration options are only relevant if you're using Hashicorp Vault as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to hashicorp in order to set this type of secret store.
ZENML_SECRETS_STORE_VAULT_ADDR: The URL of the HashiCorp Vault server to connect to. NOTE: this is the same as setting the VAULT_ADDR environment variable.
ZENML_SECRETS_STORE_VAULT_TOKEN: The token to use to authenticate with the HashiCorp Vault server. NOTE: this is the same as setting the VAULT_TOKEN environment variable.
ZENML_SECRETS_STORE_VAULT_NAMESPACE: The Vault Enterprise namespace. Not required for Vault OSS. NOTE: this is the same as setting the VAULT_NAMESPACE environment variable.
- Loss: MatryoshkaLoss with these parameters:

{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [384, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1],
    "n_dims_per_step": -1
}
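A minimal sketch of how a loss with these parameters can be constructed with sentence-transformers; the dataset and trainer wiring are omitted:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# MultipleNegativesRankingLoss treats each (positive, anchor) pair as a positive
# and every other in-batch example as a negative; MatryoshkaLoss applies it at
# each truncated dimension so embedding prefixes remain useful on their own.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
    n_dims_per_step=-1,  # -1 means every dimension is trained at each step
)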
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
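A minimal sketch of expressing the non-default values above with sentence-transformers' trainer API; output_dir and save_strategy are assumptions rather than values reported in this card:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-all-MiniLM-L6-v2",  # assumed output directory
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so load_best_model_at_end can compare epoch checkpoints
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)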
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: True
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch      | Step  | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_384_cosine_map@100 | dim_64_cosine_map@100 |
|:----------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.6667     | 1     | 0.3800                 | 0.3986                 | 0.4149                 | 0.3471                |
| 2.0        | 3     | 0.4194                 | 0.4473                 | 0.4557                 | 0.3762                |
| **2.6667** | **4** | **0.4308**             | **0.4503**             | **0.4579**             | **0.3906**            |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}