SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("northstaranlyticsma24/artic_ft_midterm")
# Run inference
sentences = [
'What topics were discussed during the meetings related to the development of the Blueprint for an AI Bill of Rights?',
'SECTION: LISTENING TO THE AMERICAN PEOPLE\nAPPENDIX\n• OSTP conducted meetings with a variety of stakeholders in the private sector and civil society. Some of these\nmeetings were specifically focused on providing ideas related to the development of the Blueprint for an AI\nBill of Rights while others provided useful general context on the positive use cases, potential harms, and/or\noversight possibilities for these technologies.',
' \nGAI systems can produce content that is inciting, radicalizing, or threatening, or that glorifies violence, \nwith greater ease and scale than other technologies. LLMs have been reported to generate dangerous or \nviolent recommendations, and some models have generated actionable instructions for dangerous or \n \n \n9 Confabulations of falsehoods are most commonly a problem for text-based outputs; for audio, image, or video \ncontent, creative generation of non-factual content can be a desired behavior. 10 For example, legal confabulations have been shown to be pervasive in current state-of-the-art LLMs. See also, \ne.g., \n \n7 \nunethical behavior.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
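Because the model ends in a Normalize() module (see the architecture above), encode returns unit-length vectors, so dot-product and cosine similarity give identical scores; this is why the dot_* and cosine_* metrics under Evaluation below match exactly. A quick check, continuing the snippet above:
import numpy as np
# The Normalize() module L2-normalizes each embedding, so every vector has length ~1
# and the dot product of two embeddings equals their cosine similarity.
print(np.linalg.norm(embeddings, axis=1))
# [1. 1. 1.] (up to floating-point error)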
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator (a runnable sketch follows the metrics table below)
Metric | Value |
---|---|
cosine_accuracy@1 | 0.7609 |
cosine_accuracy@3 | 0.8696 |
cosine_accuracy@5 | 0.913 |
cosine_accuracy@10 | 0.9783 |
cosine_precision@1 | 0.7609 |
cosine_precision@3 | 0.2899 |
cosine_precision@5 | 0.1826 |
cosine_precision@10 | 0.0978 |
cosine_recall@1 | 0.7609 |
cosine_recall@3 | 0.8696 |
cosine_recall@5 | 0.913 |
cosine_recall@10 | 0.9783 |
cosine_ndcg@10 | 0.8567 |
cosine_mrr@10 | 0.819 |
cosine_map@100 | 0.8204 |
dot_accuracy@1 | 0.7609 |
dot_accuracy@3 | 0.8696 |
dot_accuracy@5 | 0.913 |
dot_accuracy@10 | 0.9783 |
dot_precision@1 | 0.7609 |
dot_precision@3 | 0.2899 |
dot_precision@5 | 0.1826 |
dot_precision@10 | 0.0978 |
dot_recall@1 | 0.7609 |
dot_recall@3 | 0.8696 |
dot_recall@5 | 0.913 |
dot_recall@10 | 0.9783 |
dot_ndcg@10 | 0.8567 |
dot_mrr@10 | 0.819 |
dot_map@100 | 0.8204 |
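The evaluation set itself is not published with this card, so the following is only a minimal sketch of how an InformationRetrievalEvaluator could be run against this model; the queries, corpus, and relevance judgments shown are hypothetical placeholders.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("northstaranlyticsma24/artic_ft_midterm")

# Hypothetical toy data; replace with your own query/corpus/relevance mappings.
queries = {"q1": "What topics were discussed during the meetings related to the Blueprint for an AI Bill of Rights?"}
corpus = {
    "d1": "OSTP conducted meetings with a variety of stakeholders in the private sector and civil society.",
    "d2": "GAI systems can produce content that is inciting, radicalizing, or threatening.",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="example")
results = evaluator(model)
print(results)  # dict of cosine/dot accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100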
Training Details
Training Dataset
Unnamed Dataset
- Size: 363 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 363 samples:
| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |
| details | min: 2 tokens, mean: 20.1 tokens, max: 36 tokens | min: 2 tokens, mean: 228.97 tokens, max: 512 tokens |
- Samples:
| sentence_0 | sentence_1 |
|---|---|
| What are the five principles outlined in the Blueprint for an AI Bill of Rights intended to protect against? | SECTION: USING THIS TECHNICAL COMPANION - USING THIS TECHNICAL COMPANION The Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and provides examples and concrete steps for communities, industry, governments, and others to take in order to build these protections into policy, practice, or the technological design process. Taken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help guard the American public against many of the potential and actual harms identified by researchers, technologists, advocates, journalists, policymakers, and communities in the United States and around the world. This technical companion is intended to be used as a reference by people across many circumstances – anyone impacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to govern the use of an automated system. Each principle is accompanied by three supplemental sections: 1 2 WHY THIS PRINCIPLE IS IMPORTANT: This section provides a brief summary of the problems that the principle seeks to address and protect against, including illustrative examples. WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: • The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that should be tailored for particular sectors and contexts. • This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The expectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing monitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer concrete directions for how those changes can be made. • Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can be provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should be made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law enforcement, or national security considerations may prevent public release. Where public reports are not possible, the information should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguarding individuals’ rights. These reporting expectations are important for transparency, so the American people can have confidence that their rights, opportunities, and access as well as their expectations about technologies are respected. 3 HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: This section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. It describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. The examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help provide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these processes require the cooperation of and collaboration among industry, civil society, researchers, policymakers, technologists, and the public. |
| How does the technical companion suggest that automated systems should be monitored and reported on to ensure transparency and protect individual rights? | Same passage as above (SECTION: USING THIS TECHNICAL COMPANION). |
| What is the significance of the number 14 in the given context? | 14 |
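The training set is unnamed and not published, but its layout is simple: each row pairs a generated question (sentence_0) with the passage it was drawn from (sentence_1). Below is a minimal sketch of constructing such a dataset with 🤗 Datasets, using the samples above; the abbreviated passage is a placeholder.
from datasets import Dataset

# Hypothetical reconstruction of the training-set layout: question/passage pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What are the five principles outlined in the Blueprint for an AI Bill of Rights intended to protect against?",
        "What is the significance of the number 14 in the given context?",
    ],
    "sentence_1": [
        "SECTION: USING THIS TECHNICAL COMPANION ...",  # full source passage in practice
        "14",
    ],
})
print(train_dataset)  # Dataset({features: ['sentence_0', 'sentence_1'], num_rows: 2})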
- Loss: MatryoshkaLoss with these parameters:
{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1 }
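As a rough illustration, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss so the ranking loss is applied at each truncated embedding size; this is a sketch, not the exact training script.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# In-batch-negatives ranking loss over (sentence_0, sentence_1) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the inner loss at 768/512/256/128/64 dimensions, equally weighted.
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64], matryoshka_weights=[1, 1, 1, 1, 1])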
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- num_train_epochs: 5
- multi_dataset_batch_sampler: round_robin
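Assuming the dataset and loss objects sketched above, these non-default values would map onto SentenceTransformerTrainingArguments roughly as follows; the output_dir and the evaluation split are illustrative assumptions, not details reported with this card.
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="artic_ft_midterm",              # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)

split = train_dataset.train_test_split(test_size=0.1, seed=42)  # hypothetical eval split
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=loss,
)
trainer.train()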
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_map@100 |
---|---|---|
1.0 | 19 | 0.7434 |
2.0 | 38 | 0.7973 |
2.6316 | 50 | 0.8048 |
3.0 | 57 | 0.8048 |
4.0 | 76 | 0.8204 |
5.0 | 95 | 0.8204 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
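To approximately reproduce this environment, the reported versions can be pinned directly; note that the CUDA-specific PyTorch build (2.4.1+cu121) may require the appropriate extra index URL for your platform.
pip install -U sentence-transformers==3.1.1 transformers==4.44.2 torch==2.4.1 accelerate==0.34.2 datasets==3.0.0 tokenizers==0.19.1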
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}