---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:157
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: >-
      Why does the author recommend reading the first few pages of the 69-page
      PDF document related to the lawsuit?
    sentences:
      - >-
        We don’t yet know how to build GPT-4

        Frustratingly, despite the enormous leaps ahead we’ve had this year, we
        are yet to see an alternative model that’s better than GPT-4.

        OpenAI released GPT-4 in March, though it later turned out we had a
        sneak peek of it in February when Microsoft used it as part of the new
        Bing.

        This may well change in the next few weeks: Google’s Gemini Ultra has
        big claims, but isn’t yet available for us to try out.

        The team behind Mistral are working to beat GPT-4 as well, and their
        track record is already extremely strong considering their first public
        model only came out in September, and they’ve released two significant
        improvements since then.
      - >-
        Just this week, the New York Times launched a landmark lawsuit against
        OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth
        reading—especially the first few pages, which lay out the issues in a
        way that’s surprisingly easy to follow. The rest of the document
        includes some of the clearest explanations of what LLMs are, how they
        work and how they are built that I’ve read anywhere.

        The legal arguments here are complex. I’m not a lawyer, but I don’t
        think this one will be easily decided. Whichever way it goes, I expect
        this case to have a profound impact on how this technology develops in
        the future.
      - >-
        Nothing yet from Anthropic or Meta but I would be very surprised if they
        don’t have their own inference-scaling models in the works. Meta
        published a relevant paper Training Large Language Models to Reason in a
        Continuous Latent Space in December.

        Was the best currently available LLM trained in China for less than $6m?

        Not quite, but almost! It does make for a great attention-grabbing
        headline.

        The big news to end the year was the release of DeepSeek v3—dropped on
        Hugging Face on Christmas Day without so much as a README file, then
        followed by documentation and a paper the day after that.
  - source_sentence: Why does the author find the term “agents” frustrating?
    sentences:
      - >-
        Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks
        about Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!


        I can now run a GPT-4 class model on my laptop talks about running
        Meta’s Llama 3.3 70B (released in December)
      - >-
        “Agents” still haven’t really happened yet

        I find the term “agents” extremely frustrating. It lacks a single, clear
        and widely understood meaning... but the people who use the term never
        seem to acknowledge that.

        If you tell me that you are building “agents”, you’ve conveyed almost no
        information to me at all. Without reading your mind I have no way of
        telling which of the dozens of possible definitions you are talking
        about.
      - >-
        Terminology aside, I remain skeptical as to their utility based, once
        again, on the challenge of gullibility. LLMs believe anything you tell
        them. Any system that attempts to make meaningful decisions on your
        behalf will run into the same roadblock: how good is a travel agent, or
        a digital assistant, or even a research tool if it can’t distinguish
        truth from fiction?

        Just the other day Google Search was caught serving up an entirely fake
        description of the non-existent movie “Encanto 2”. It turned out to be
        summarizing an imagined movie listing from a fan fiction wiki.
  - source_sentence: Which company released the QwQ model under an Apache 2.0 license?
    sentences:
      - >-
        Embeddings: What they are and why they matter

        61.7k

        79.3k



        Catching up on the weird world of LLMs

        61.6k

        85.9k



        llamafile is the new best way to run an LLM on your own computer

        52k

        66k



        Prompt injection explained, with video, slides, and a transcript

        51k

        61.9k



        AI-enhanced development makes me more ambitious with my projects

        49.6k

        60.1k



        Understanding GPT tokenizers

        49.5k

        61.1k



        Exploring GPTs: ChatGPT in a trench coat?

        46.4k

        58.5k



        Could you train a ChatGPT-beating model for $85,000 and run it in a
        browser?

        40.5k

        49.2k



        How to implement Q&A against your documentation with GPT3, embeddings
        and Datasette

        37.3k

        44.9k



        Lawyer cites fake cases invented by ChatGPT, judge is not amused

        37.1k

        47.4k
      - >-
        OpenAI are not the only game in town here. Google released their first
        entrant in the category, gemini-2.0-flash-thinking-exp, on December
        19th.

        Alibaba’s Qwen team released their QwQ model on November 28th—under an
        Apache 2.0 license, and that one I could run on my own machine. They
        followed that up with a vision reasoning model called QvQ on December
        24th, which I also ran locally.

        DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out
        through their chat interface on November 20th.

        To understand more about inference scaling I recommend Is AI progress
        slowing down? by Arvind Narayanan and Sayash Kapoor.
      - >-
        Against this photo of butterflies at the California Academy of Sciences:



        A shallow dish, likely a hummingbird or butterfly feeder, is red. 
        Pieces of orange slices of fruit are visible inside the dish.

        Two butterflies are positioned in the feeder, one is a dark brown/black
        butterfly with white/cream-colored markings.  The other is a large,
        brown butterfly with patterns of lighter brown, beige, and black
        markings, including prominent eye spots. The larger brown butterfly
        appears to be feeding on the fruit.
  - source_sentence: >-
      How does the 2024 review of Large Language Models build upon the insights
      from the 2023 review?
    sentences:
      - >-
        Law is not ethics. Is it OK to train models on people’s content without
        their permission, when those models will then be used in ways that
        compete with those people?

        As the quality of results produced by AI models has increased over the
        year, these questions have become even more pressing.

        The impact on human society in terms of these models is already huge, if
        difficult to objectively measure.

        People have certainly lost work to them—anecdotally, I’ve seen this for
        copywriters, artists and translators.

        There are a great deal of untold stories here. I’m hoping 2024 sees
        significant amounts of dedicated journalism on this topic.

        My blog in 2023

        Here’s a tag cloud for content I posted to my blog in 2023 (generated
        using Django SQL Dashboard):
      - >-
        The GPT-4 barrier was comprehensively broken

        In my December 2023 review I wrote about how We don’t yet know how to
        build GPT-4—OpenAI’s best model was almost a year old at that point, yet
        no other AI lab had produced anything better. What did OpenAI know that
        the rest of us didn’t?

        I’m relieved that this has changed completely in the past twelve months.
        18 organizations now have models on the Chatbot Arena Leaderboard that
        rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
        board)—70 models in total.
      - >-
        Things we learned about LLMs in 2024






















        Simon Willison’s Weblog

        Subscribe







        Things we learned about LLMs in 2024

        31st December 2024

        A lot has happened in the world of Large Language Models over the course
        of 2024. Here’s a review of things we figured out about the field in the
        past twelve months, plus my attempt at identifying key themes and
        pivotal moments.

        This is a sequel to my review of 2023.

        In this article:
  - source_sentence: >-
      What is the challenge in building AI personal assistants based on the
      gullibility of language models?
    sentences:
      - >-
        Language Models are gullible. They “believe” what we tell them—what’s in
        their training data, then what’s in the fine-tuning data, then what’s in
        the prompt.

        In order to be useful tools for us, we need them to believe what we feed
        them!

        But it turns out a lot of the things we want to build need them not to
        be gullible.

        Everyone wants an AI personal assistant. If you hired a real-world
        personal assistant who believed everything that anyone told them, you
        would quickly find that their ability to positively impact your life was
        severely limited.
      - |-
        Large Language Models
        They’re actually quite easy to build
        You can run LLMs on your own devices
        Hobbyists can build their own fine-tuned models
        We don’t yet know how to build GPT-4
        Vibes Based Development
        LLMs are really smart, and also really, really dumb
        Gullibility is the biggest unsolved problem
        Code may be the best application
        The ethics of this space remain diabolically complex
        My blog in 2023
      - >-
        These price drops are driven by two factors: increased competition and
        increased efficiency. The efficiency thing is really important for
        everyone who is concerned about the environmental impact of LLMs. These
        price drops tie directly to how much energy is being used for running
        prompts.

        There’s still plenty to worry about with respect to the environmental
        impact of the great AI datacenter buildout, but a lot of the concerns
        over the energy cost of individual prompts are no longer credible.

        Here’s a fun napkin calculation: how much would it cost to generate
        short descriptions of every one of the 68,000 photos in my personal
        photo library using Google’s Gemini 1.5 Flash 8B (released in October),
        their cheapest model?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9583333333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9583333333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9583333333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9846220730654774
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9791666666666666
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9791666666666666
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
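
The modules above amount to running the underlying BERT encoder, taking the [CLS] token's hidden state as the sentence embedding, and L2-normalizing it. Here is a rough sketch of that pipeline using the plain transformers API; the example texts are arbitrary, and in practice the SentenceTransformer API shown under Usage is the intended entry point:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

texts = ["Gullibility is the biggest unsolved problem", "LLMs believe anything you tell them"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**batch)

# CLS pooling + L2 normalization, mirroring the Pooling(pooling_mode_cls_token=True)
# and Normalize() modules listed above
embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 1024])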

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")
# Run inference
sentences = [
    'What is the challenge in building AI personal assistants based on the gullibility of language models?',
    'Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.\nIn order to be useful tools for us, we need them to believe what we feed them!\nBut it turns out a lot of the things we want to build need them not to be gullible.\nEveryone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.',
    'These price drops are driven by two factors: increased competition and increased efficiency. The efficiency thing is really important for everyone who is concerned about the environmental impact of LLMs. These price drops tie directly to how much energy is being used for running prompts.\nThere’s still plenty to worry about with respect to the environmental impact of the great AI datacenter buildout, but a lot of the concerns over the energy cost of individual prompts are no longer credible.\nHere’s a fun napkin calculation: how much would it cost to generate short descriptions of every one of the 68,000 photos in my personal photo library using Google’s Gemini 1.5 Flash 8B (released in October), their cheapest model?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
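
Because the training pairs are questions matched to passages (see Training Details below), a natural follow-on use is semantic search: embed a set of passages once, then rank them against a new question. A small sketch along those lines; the passages and query here are illustrative only:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")

# Illustrative corpus and question
passages = [
    "Language Models are gullible. They believe what we tell them.",
    "These price drops are driven by two factors: increased competition and increased efficiency.",
]
query = "Why is gullibility a problem for AI assistants?"

passage_embeddings = model.encode(passages)
query_embedding = model.encode([query])

# model.similarity returns a (1, len(passages)) matrix of cosine similarities
scores = model.similarity(query_embedding, passage_embeddings)
best = int(scores.argmax())
print(passages[best], float(scores[0, best]))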

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.9583
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.9583
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.9583
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9846
cosine_mrr@10 0.9792
cosine_map@100 0.9792
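
The metric names above match what sentence-transformers' InformationRetrievalEvaluator reports. A sketch of how a comparable evaluation could be run; the queries, corpus, and relevance labels below are placeholders rather than the actual held-out split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")

# Placeholder evaluation data: query id -> text, corpus id -> text,
# and query id -> set of relevant corpus ids
queries = {"q1": "Why does the author find the term agents frustrating?"}
corpus = {
    "d1": "I find the term agents extremely frustrating. It lacks a single, clear and widely understood meaning.",
    "d2": "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="held_out")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_precision@k, cosine_ndcg@10, ...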

Training Details

Training Dataset

Unnamed Dataset

  • Size: 157 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 157 samples:
    • sentence_0: string; min: 2 tokens, mean: 20.94 tokens, max: 37 tokens
    • sentence_1: string; min: 43 tokens, mean: 135.72 tokens, max: 214 tokens
  • Samples:
    • sentence_0: What was the typical context length accepted by most models last year?
      sentence_1: Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.
    • sentence_0: How many tokens can Google’s Gemini series accept in 2024?
      sentence_1: Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.
    • sentence_0: What are the new capabilities introduced by Google’s Gemini 1.5 Pro?
      sentence_1: The earliest of those was Google’s Gemini 1.5 Pro, released in February. In addition to producing GPT-4 level outputs, it introduced several brand new capabilities to the field—most notably its 1 million (and then later 2 million) token input context length, and the ability to input video. I wrote about this at the time in The killer app of Gemini Pro 1.5 is video, which earned me a short appearance as a talking head in the Google I/O opening keynote in May.
  • Loss: MatryoshkaLoss with these parameters (a truncation sketch follows below):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
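
MatryoshkaLoss wraps MultipleNegativesRankingLoss so the ranking objective is also applied to the first 768, 512, 256, 128, and 64 dimensions of each embedding. In practice this means the 1024-dimensional output can be truncated to any of those sizes and re-normalized, with only a modest drop in quality. A minimal sketch, with 256 chosen arbitrarily:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")

sentences = ["Gullibility is the biggest unsolved problem", "Code may be the best application"]
full = model.encode(sentences)  # shape (2, 1024)

# Keep the first 256 dimensions, then re-normalize so cosine similarity still behaves
dim = 256
truncated = full[:, :dim]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)  # (2, 256)

Recent sentence-transformers releases also expose a truncate_dim argument on the SentenceTransformer constructor, which applies the same truncation at encode time.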
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
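
A sketch of a training run along these lines, assuming a dataset with the sentence_0 (question) and sentence_1 (passage) columns described above. The output directory and the single placeholder pair are illustrative; the real run also set eval_strategy to steps with an evaluator attached, omitted here for brevity:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder pair; the actual run used 157 question/passage pairs as described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What was the typical context length accepted by most models last year?"],
    "sentence_1": ["Last year most models accepted 4,096 or 8,192 tokens..."],
})

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft",  # placeholder path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()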

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 16 0.9638
2.0 32 0.9484
3.0 48 0.9484
3.125 50 0.9484
4.0 64 0.9539
5.0 80 0.9692
6.0 96 0.9692
6.25 100 0.9692
7.0 112 0.9692
8.0 128 0.9846
9.0 144 0.9846
9.375 150 0.9846
10.0 160 0.9846

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}