metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: How does the size of DeepSeek v3 compare to Meta’s Llama 3.1 405B model?
sentences:
- >-
Terminology aside, I remain skeptical as to their utility based, once
again, on the challenge of gullibility. LLMs believe anything you tell
them. Any systems that attempts to make meaningful decisions on your
behalf will run into the same roadblock: how good is a travel agent, or
a digital assistant, or even a research tool if it can’t distinguish
truth from fiction?
Just the other day Google Search was caught serving up an entirely fake
description of the non-existant movie “Encanto 2”. It turned out to be
summarizing an imagined movie listing from a fan fiction wiki.
- >-
DeepSeek v3 is a huge 685B parameter model—one of the largest openly
licensed models currently available, significantly bigger than the
largest of Meta’s Llama series, Llama 3.1 405B.
Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka
the Chatbot Arena) currently rank it 7th, just behind the Gemini 2.0 and
OpenAI 4o/o1 models. This is by far the highest ranking openly licensed
model.
The really impressive thing about DeepSeek v3 is the training cost. The
model was trained on 2,788,000 H800 GPU hours at an estimated cost of
$5,576,000. Llama 3.1 405B trained 30,840,000 GPU hours—11x that used by
DeepSeek v3, for a model that benchmarks slightly worse.
- >-
Against this photo of butterflies at the California Academy of Sciences:
A shallow dish, likely a hummingbird or butterfly feeder, is red.
Pieces of orange slices of fruit are visible inside the dish.
Two butterflies are positioned in the feeder, one is a dark brown/black
butterfly with white/cream-colored markings. The other is a large,
brown butterfly with patterns of lighter brown, beige, and black
markings, including prominent eye spots. The larger brown butterfly
appears to be feeding on the fruit.
- source_sentence: >-
How does the author compare the difficulty of training an LLM to another
complex task?
sentences:
- >-
“Agents” still haven’t really happened yet
I find the term “agents” extremely frustrating. It lacks a single, clear
and widely understood meaning... but the people who use the term never
seem to acknowledge that.
If you tell me that you are building “agents”, you’ve conveyed almost no
information to me at all. Without reading your mind I have no way of
telling which of the dozens of possible definitions you are talking
about.
- >-
So training an LLM still isn’t something a hobbyist can afford, but it’s
no longer the sole domain of the super-rich. I like to compare the
difficulty of training an LLM to that of building a suspension
bridge—not trivial, but hundreds of countries around the world have
figured out how to do it. (Correction: Wikipedia’s Suspension bridges by
country category lists 44 countries).
You can run LLMs on your own devices
In January of this year, I thought it would be years before I could run
a useful LLM on my own computer. GPT-3 and 3.5 were pretty much the only
games in town, and I thought that even if the model weights were
available it would take a $10,000+ server to run them.
- >-
This prompt-driven custom interface feature is so powerful and easy to
build (once you’ve figured out the gnarly details of browser sandboxing)
that I expect it to show up as a feature in a wide range of products in
2025.
Universal access to the best models lasted for just a few short months
For a few short months this year all three of the best available
models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely
available to most of the world.
- source_sentence: What is the new approach to scaling models mentioned in the context?
sentences:
- >-
So far, I think they’re a net positive. I’ve used them on a personal
level to improve my productivity (and entertain myself) in all sorts of
different ways. I think people who learn how to use them effectively can
gain a significant boost to their quality of life.
A lot of people are yet to be sold on their value! Some think their
negatives outweigh their positives, some think they are all hot air, and
some even think they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that
they’re actually quite easy to build.
- >-
The biggest innovation here is that it opens up a new way to scale a
model: instead of improving model performance purely through additional
compute at training time, models can now take on harder problems by
spending more compute on inference.
The sequel to o1, o3 (they skipped “o2” for European trademark reasons)
was announced on 20th December with an impressive result against the
ARC-AGI benchmark, albeit one that likely involved more than $1,000,000
of compute time expense!
o3 is expected to ship in January. I doubt many people have real-world
problems that would benefit from that level of compute expenditure—I
certainly don’t!—but it appears to be a genuine next step in LLM
architecture for taking on much harder problems.
- >-
Language Models are gullible. They “believe” what we tell them—what’s in
their training data, then what’s in the fine-tuning data, then what’s in
the prompt.
In order to be useful tools for us, we need them to believe what we feed
them!
But it turns out a lot of the things we want to build need them not to
be gullible.
Everyone wants an AI personal assistant. If you hired a real-world
personal assistant who believed everything that anyone told them, you
would quickly find that their ability to positively impact your life was
severely limited.
- source_sentence: When was Anthropic’s Claude 3 series initially launched?
sentences:
- >-
Prompt injection is a natural consequence of this gulibility. I’ve seen
precious little progress on tackling that problem in 2024, and we’ve
been talking about it since September 2022.
I’m beginning to see the most popular idea of “agents” as dependent on
AGI itself. A model that’s robust against gulliblity is a very tall
order indeed.
Evals really matter
Anthropic’s Amanda Askell (responsible for much of the work behind
Claude’s Character):
- >-
A year ago, the only organization that had released a generally useful
LLM was OpenAI. We’ve now seen better-than-GPT-3 class models produced
by Anthropic, Mistral, Google, Meta, EleutherAI, Stability AI, TII in
Abu Dhabi (Falcon), Microsoft Research, xAI, Replit, Baidu and a bunch
of other organizations.
The training cost (hardware and electricity) is still
significant—initially millions of dollars, but that seems to have
dropped to the tens of thousands already. Microsoft’s Phi-2 claims to
have used “14 days on 96 A100 GPUs”, which works out at around $35,000
using current Lambda pricing.
- >-
Getting back to models that beat GPT-4: Anthropic’s Claude 3 series
launched in March, and Claude 3 Opus quickly became my new favourite
daily-driver. They upped the ante even more in June with the launch of
Claude 3.5 Sonnet—a model that is still my favourite six months later
(though it got a significant upgrade on October 22, confusingly keeping
the same 3.5 version number. Anthropic fans have since taken to calling
it Claude 3.6).
- source_sentence: >-
Why might fine-tuning an existing LLM be more accessible to hobbyists than
training one from scratch?
sentences:
- >-
I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly
great model) on my iPhone. You can install several different apps to get
your own, local, completely private LLM. My own LLM project provides a
CLI tool for running an array of different models via plugins.
You can even run them entirely in your browser using WebAssembly and the
latest Chrome!
Hobbyists can build their own fine-tuned models
I said earlier that building an LLM was still out of reach of hobbyists.
That may be true for training from scratch, but fine-tuning one of those
models is another matter entirely.
- >-
Intuitively, one would expect that systems this powerful would take
millions of lines of complex code. Instead, it turns out a few hundred
lines of Python is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make
these things work, and the quantity and quality of the training data
appears to be the most important factor in how good the resulting model
is.
If you can gather the right data, and afford to pay for the GPUs to
train it, you can build an LLM.
- >-
Nothing yet from Anthropic or Meta but I would be very surprised if they
don’t have their own inference-scaling models in the works. Meta
published a relevant paper Training Large Language Models to Reason in a
Continuous Latent Space in December.
Was the best currently available LLM trained in China for less than $6m?
Not quite, but almost! It does make for a great attention-grabbing
headline.
The big news to end the year was the release of DeepSeek v3—dropped on
Hugging Face on Christmas Day without so much as a README file, then
followed by documentation and a paper the day after that.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-c53d04b6-ee03-4160-9525-a7af282c08e8")
# Run inference
sentences = [
'Why might fine-tuning an existing LLM be more accessible to hobbyists than training one from scratch?',
'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins.\nYou can even run them entirely in your browser using WebAssembly and the latest Chrome!\nHobbyists can build their own fine-tuned models\nI said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely.',
'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t have their own inference-scaling models in the works. Meta published a relevant paper Training Large Language Models to Reason in a Continuous Latent Space in December.\nWas the best currently available LLM trained in China for less than $6m?\nNot quite, but almost! It does make for a great attention-grabbing headline.\nThe big news to end the year was the release of DeepSeek v3—dropped on Hugging Face on Christmas Day without so much as a README file, then followed by documentation and a paper the day after that.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9167 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.9167 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.9167 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9692 |
cosine_mrr@10 | 0.9583 |
cosine_map@100 | 0.9583 |
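The metrics above come from sentence-transformers’ InformationRetrievalEvaluator. The exact query/corpus split used for this card is not published, so the snippet below is only an illustration of how such an evaluation is wired up, with hypothetical ids and placeholder texts standing in for the real held-out data.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-c53d04b6-ee03-4160-9525-a7af282c08e8")

# Hypothetical held-out split: query ids/texts, corpus ids/texts, and the
# set of relevant corpus ids for each query.
queries = {"q1": "When was Anthropic’s Claude 3 series initially launched?"}
corpus = {
    "d1": "Anthropic’s Claude 3 series launched in March ...",  # placeholder text
    "d2": "DeepSeek v3 is a huge 685B parameter model ...",      # placeholder text
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_recall@k, cosine_ndcg@10, ...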
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 12 tokens, mean: 20.94 tokens, max: 32 tokens | min: 43 tokens, mean: 135.14 tokens, max: 214 tokens |
- Samples:
sentence_0 | sentence_1 |
---|---|
When did Meta release the original Llama model? | Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook. I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call! This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use. Today there are literally thousands of LLMs that can be run locally, on all manner of different devices. |
What was significant about the release of Llama 2 in July? | Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook. I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call! This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use. Today there are literally thousands of LLMs that can be run locally, on all manner of different devices. |
What are some companies mentioned that have developed multi-modal audio models? | Your browser does not support the audio element. OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025. Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans: Your browser does not support the audio element. |
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 }
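In sentence-transformers terms, that configuration amounts to wrapping MultipleNegativesRankingLoss (in-batch negatives over the sentence_0/sentence_1 pairs) in MatryoshkaLoss, so the same objective is applied at each truncated embedding size. A minimal sketch of that construction, using the parameters listed above:
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Ranking loss that treats each (sentence_0, sentence_1) pair as a positive
# and every other in-batch sentence_1 as a negative.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply that loss at every Matryoshka dimension, with equal weights.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)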
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
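Taken together, the non-default settings above correspond roughly to the trainer setup sketched below. The output directory and the inline training example are placeholders (the real run used the 156-pair dataset described earlier), and per-step evaluation is omitted because it additionally requires an evaluator or eval dataset, as in the Metrics section.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import MultiDatasetBatchSamplers

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder standing in for the 156 (sentence_0, sentence_1) pairs described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["When did Meta release the original Llama model?"],
    "sentence_1": ["Then in February, Meta released Llama. ..."],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft",  # placeholder
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()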
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.9638 |
2.0 | 32 | 0.9638 |
3.0 | 48 | 0.9692 |
3.125 | 50 | 0.9692 |
4.0 | 64 | 0.9692 |
5.0 | 80 | 0.9539 |
6.0 | 96 | 0.9539 |
6.25 | 100 | 0.9539 |
7.0 | 112 | 0.9539 |
8.0 | 128 | 0.9539 |
9.0 | 144 | 0.9692 |
9.375 | 150 | 0.9692 |
10.0 | 160 | 0.9692 |
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
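To reproduce this environment, pinning the versions listed above should get close; the PyTorch build (2.6.0+cu124) comes from the CUDA 12.4 wheel index, so adjust that line for your platform.
pip install "sentence-transformers==4.1.0" "transformers==4.51.3" "datasets==3.5.1" "accelerate==1.6.0" "tokenizers==0.21.1"
pip install "torch==2.6.0" --index-url https://download.pytorch.org/whl/cu124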
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}