zenml/finetuned-snowflake-arctic-embed-m-v1.5
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
# Run inference
sentences = [
'How does ZenML utilize type annotations in step outputs to enhance data handling between pipeline steps?',
'Handle Data/Artifacts\n\nStep outputs in ZenML are stored in the artifact store. This enables caching, lineage and auditability. Using type annotations helps with transparency, passing data between steps, and serializing/des\n\nFor best results, use type annotations for your outputs. This is good coding practice for transparency, helps ZenML handle passing data between steps, and also enables ZenML to serialize and deserialize (referred to as \'materialize\' in ZenML) the data.\n\n@step\ndef load_data(parameter: int) -> Dict[str, Any]:\n\n# do something with the parameter here\n\ntraining_data = [[1, 2], [3, 4], [5, 6]]\n labels = [0, 1, 0]\n return {\'features\': training_data, \'labels\': labels}\n\n@step\ndef train_model(data: Dict[str, Any]) -> None:\n total_features = sum(map(sum, data[\'features\']))\n total_labels = sum(data[\'labels\'])\n \n # Train some model here\n \n print(f"Trained model using {len(data[\'features\'])} data points. "\n f"Feature sum is {total_features}, label sum is {total_labels}")\n\n@pipeline \ndef simple_ml_pipeline(parameter: int):\n dataset = load_data(parameter=parameter) # Get the output \n train_model(dataset) # Pipe the previous step output into the downstream step\n\nIn this code, we define two steps: load_data and train_model. The load_data step takes an integer parameter and returns a dictionary containing training data and labels. The train_model step receives the dictionary from load_data, extracts the features and labels, and trains a model (not shown here).\n\nFinally, we define a pipeline simple_ml_pipeline that chains the load_data and train_model steps together. The output from load_data is passed as input to train_model, demonstrating how data flows between steps in a ZenML pipeline.\n\nPreviousDisable colorful loggingNextHow ZenML stores data\n\nLast updated 4 months ago',
" your GCP Image Builder to the GCP cloud platform.To set up the GCP Image Builder to authenticate to GCP and access the GCP Cloud Build services, it is recommended to leverage the many features provided by the GCP Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.\n\nIf you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You also have the option to configure a GCP Service Connector that can be used to access more than just the GCP Cloud Build service:\n\nzenml service-connector register --type gcp -i\n\nA non-interactive CLI example that leverages the Google Cloud CLI configuration on your local machine to auto-configure a GCP Service Connector for the GCP Cloud Build service:\n\nzenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --resource-name <GCS_BUCKET_NAME> --auto-configure\n\nExample Command Output\n\n$ zenml service-connector register gcp-generic --type gcp --resource-type gcp-generic --auto-configure\nSuccessfully registered service connector `gcp-generic` with access to the following resources:\n┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓\n┃ RESOURCE TYPE  │ RESOURCE NAMES ┃\n┠────────────────┼────────────────┨\n┃ 🔵 gcp-generic │ zenml-core     ┃\n┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛\n\nNote: Please remember to grant the entity associated with your GCP credentials permissions to access the Cloud Build API and to run Cloud Builder jobs (e.g. the Cloud Build Editor IAM role). The GCP Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.\n\nIf you already have one or more GCP Service Connectors configured in your ZenML deployment, you can check which of them can be used to access generic GCP resources like the GCP Image Builder required for your GCP Image Builder by running e.g.:",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
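Semantic Search Example
This model was finetuned on question/documentation-chunk pairs from the ZenML docs (see Training Details), so a common use is ranking documentation passages against a query. The snippet below is a minimal retrieval sketch; the query and corpus strings are illustrative placeholders, not part of this repository.
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
# Illustrative query and corpus; replace with your own documentation chunks
query = "How do I pass data between ZenML pipeline steps?"
corpus = [
    "Step outputs in ZenML are stored in the artifact store, which enables caching and lineage.",
    "The GCP Image Builder uses Google Cloud Build to build container images for your pipelines.",
]
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)
# Embeddings are L2-normalized by the Normalize() module, so cosine similarity
# and dot product give the same ranking
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, len(corpus)]
best = scores[0].argmax().item()
print(corpus[best], float(scores[0][best]))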
Evaluation
Metrics
Information Retrieval
- Dataset: dim_384
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.75 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.75 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.75 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.875 |
cosine_mrr@10 | 0.8333 |
cosine_map@100 | 0.8333 |
Information Retrieval
- Dataset: dim_256
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.75 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.75 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.75 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.875 |
cosine_mrr@10 | 0.8333 |
cosine_map@100 | 0.8333 |
Information Retrieval
- Dataset: dim_128
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.75 |
cosine_accuracy@3 | 0.75 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.75 |
cosine_precision@3 | 0.25 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.75 |
cosine_recall@3 | 0.75 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.8577 |
cosine_mrr@10 | 0.8125 |
cosine_map@100 | 0.8125 |
Information Retrieval
- Dataset: dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.75 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.75 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.75 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.875 |
cosine_mrr@10 | 0.8333 |
cosine_map@100 | 0.8333 |
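The dim_384, dim_256, dim_128 and dim_64 results above correspond to evaluating the model with embeddings truncated to the Matryoshka dimensions used during training. Below is a rough sketch of how such an evaluation can be reproduced with InformationRetrievalEvaluator, assuming a recent sentence-transformers release that supports the truncate_dim argument; the queries, corpus and relevant_docs mappings are placeholders you would build from your own evaluation split.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator
model = SentenceTransformer("zenml/finetuned-snowflake-arctic-embed-m-v1.5")
# Placeholder evaluation data: query/document ids mapped to texts, plus relevance judgements
queries = {"q1": "How does ZenML store step outputs?"}
corpus = {
    "d1": "Step outputs in ZenML are stored in the artifact store.",
    "d2": "The GCP Image Builder uses Google Cloud Build.",
}
relevant_docs = {"q1": {"d1"}}
evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    truncate_dim=256,  # evaluate on embeddings truncated to 256 dimensions
    name="dim_256",
)
print(evaluator(model))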
Training Details
Training Dataset
json
- Dataset: json
- Size: 36 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 36 samples:
 | positive | anchor |
---|---|---|
type | string | string |
details | min: 13 tokens, mean: 23.14 tokens, max: 38 tokens | min: 31 tokens, mean: 311.83 tokens, max: 512 tokens |
- Samples:
positive: What are the necessary steps to deploy the KubernetesSparkStepOperator, and what configurations are required for running Spark on Kubernetes?
anchor:
ator which runs Steps with Spark on Kubernetes."""
def _backend_configuration(
self,
spark_config: SparkConf,
step_config: "StepConfiguration",
) -> None:
"""Configures Spark to run on Kubernetes."""
# Build and push the image
docker_image_builder = PipelineDockerImageBuilder()
image_name = docker_image_builder.build_and_push_docker_image(...)
# Adjust the spark configuration
spark_config.set("spark.kubernetes.container.image", image_name)
...
For Kubernetes, there are also some additional important configuration parameters:
namespace is the namespace under which the driver and executor pods will run.
service_account is the service account that will be used by various Spark components (to create and watch the pods).
Additionally, the _backend_configuration method is adjusted to handle the Kubernetes-specific configuration.
When to use it
You should use the Spark step operator:
when you are dealing with large amounts of data.
when you are designing a step that can benefit from distributed computing paradigms in terms of time and resources.
How to deploy it
To use the KubernetesSparkStepOperator you will need to setup a few things first:
Remote ZenML server: See the deployment guide for more information.
Kubernetes cluster: There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure. For AWS, you can follow the Spark EKS Setup Guide below.
Spark EKS Setup Guide
The following guide will walk you through how to spin up and configure a Amazon Elastic Kubernetes Service with Spark on it:
EKS Kubernetes Cluster
Follow this guide to create an Amazon EKS cluster role.
Follow this guide to create an Amazon EC2 node role.
Go to the IAM website, and select Roles to edit both roles.
Attach the AmazonRDSFullAccess and AmazonS3FullAccess policies to both roles.
Go to the EKS website.
Make sure the correct region is selected on the top right.
positive: How do I set up a GCP Service Connector within ZenML to authenticate and access GCP Cloud Build services?
anchor:
your GCP Image Builder to the GCP cloud platform.To set up the GCP Image Builder to authenticate to GCP and access the GCP Cloud Build services, it is recommended to leverage the many features provided by the GCP Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.
If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You also have the option to configure a GCP Service Connector that can be used to access more than just the GCP Cloud Build service:
zenml service-connector register --type gcp -i
A non-interactive CLI example that leverages the Google Cloud CLI configuration on your local machine to auto-configure a GCP Service Connector for the GCP Cloud Build service:
zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --resource-name <GCS_BUCKET_NAME> --auto-configure
Example Command Output
$ zenml service-connector register gcp-generic --type gcp --resource-type gcp-generic --auto-configure
Successfully registered service connector `gcp-generic` with access to the following resources:
┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE  │ RESOURCE NAMES ┃
┠────────────────┼────────────────┨
┃ 🔵 gcp-generic │ zenml-core     ┃
┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
Note: Please remember to grant the entity associated with your GCP credentials permissions to access the Cloud Build API and to run Cloud Builder jobs (e.g. the Cloud Build Editor IAM role). The GCP Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more GCP Service Connectors configured in your ZenML deployment, you can check which of them can be used to access generic GCP resources like the GCP Image Builder required for your GCP Image Builder by running e.g.:
positive: How do I register and activate a ZenML stack with a new GCP Image Builder while ensuring proper authentication?
anchor:
build to finish. More information: Build Timeout.We can register the image builder and use it in our active stack:
zenml image-builder register \
  --flavor=gcp \
  --cloud_builder_image= \
  --network= \
  --build_timeout=
# Register and activate a stack with the new image builder
zenml stack register -i ... --set
You also need to set up authentication required to access the Cloud Build GCP services.
Authentication Methods
Integrating and using a GCP Image Builder in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the GCP cloud platform is through a GCP Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the GCP Image Builder with other remote stack components also running in GCP.
This method uses the implicit GCP authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure a GCP Image Builder. You don't need to supply credentials explicitly when you register the GCP Image Builder, as it leverages the local credentials and configuration that the Google Cloud CLI stores on your local machine. However, you will need to install and set up the Google Cloud CLI on your machine as a prerequisite, as covered in the Google Cloud documentation , before you register the GCP Image Builder.
Stacks using the GCP Image Builder set up with local authentication are not portable across environments. To make ZenML pipelines fully portable, it is recommended to use a GCP Service Connector to authenticate your GCP Image Builder to the GCP cloud platform.
- Loss:
MatryoshkaLoss with these parameters:
{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 }
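For reference, a minimal sketch of how a loss with these parameters can be constructed in sentence-transformers is shown below. The base model and the Matryoshka dimensions/weights come from this card; the train.json file name and its anchor/positive column layout are assumptions for illustration.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")
# Assumed local JSON file with one {"anchor": ..., "positive": ...} object per line
train_dataset = load_dataset("json", data_files="train.json", split="train")
# MultipleNegativesRankingLoss treats the other in-batch positives as negatives;
# MatryoshkaLoss applies it at each truncated dimension with the listed weights
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
)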
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- tf32: False
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
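These non-default values map onto SentenceTransformerTrainingArguments roughly as follows. This is a sketch assuming sentence-transformers 3.x; the output_dir value and the explicit save_strategy are assumptions added so that load_best_model_at_end has a matching save/eval strategy.
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers
args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-snowflake-arctic-embed-m-v1.5",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy when load_best_model_at_end=True
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-05,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)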
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: False
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: True
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | dim_384_cosine_map@100 | dim_256_cosine_map@100 | dim_128_cosine_map@100 | dim_64_cosine_map@100 |
---|---|---|---|---|---|
1.0 | 1 | 0.8333 | 0.8333 | 0.8125 | 0.8333 |
2.0 | 3 | 0.8333 | 0.8333 | 0.8125 | 0.8333 |
3.0 | 4 | 0.8333 | 0.8333 | 0.8125 | 0.8333 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.2.0
- Transformers: 4.45.2
- PyTorch: 2.5.0+cu124
- Accelerate: 1.0.1
- Datasets: 3.0.1
- Tokenizers: 0.20.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}