---
base_model: HuggingFaceTB/SmolLM2-360M
library_name: transformers
model_name: SmolLM2-360M-tldr-sft-2025-02-12_15-13
tags:
- generated_from_trainer
- trl
- sft
license: mit
datasets:
- davanstrien/hub-tldr-dataset-summaries-llama
- davanstrien/hub-tldr-model-summaries-llama
---
# Smol-Hub-tldr
<div style="float: right; margin-left: 1em;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60107b385ac3e86b3ea4fc34/dD9vx3VOPB0Tf6C_ZjJT2.png" alt="Model visualization" width="200"/>
</div>
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M). The model is focused on generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub. These summaries are intended to be used for:
- creating useful tl;dr descriptions that can give you a quick sense of what a dataset or model is for
- as input text for creating embeddings for semantic search, as sketched below. You can see a demo of this in [librarian-bots/huggingface-datasets-semantic-search](https://huggingface.co/spaces/librarian-bots/huggingface-datasets-semantic-search).
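
A minimal sketch of the embedding use case, assuming a generated summary as input. The encoder model here is an illustrative choice, not necessarily the one used in the demo space:

```python
from sentence_transformers import SentenceTransformer

# Illustrative encoder choice; any sentence-embedding model works here
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

summary = (
    "This model is a fine-tuned version of SmolLM2-360M for generating concise, "
    "one-sentence summaries of model and dataset cards from the Hugging Face Hub."
)
embedding = encoder.encode(summary)  # vector usable for semantic search
```
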
The model was trained using supervised fine-tuning (SFT) with [TRL](https://github.com/huggingface/trl).
A meta example of a summary generated for this card:
> This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub.
## Intended Use
The model is designed to generate brief, informative summaries of:
- Model cards: Focusing on key capabilities and characteristics
- Dataset cards: Capturing essential dataset characteristics and purposes
## Training Data
The model was trained on:
- Model card summaries generated by Llama 3.3 70B
- Dataset card summaries generated by Llama 3.3 70B
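
For reference, here is a minimal sketch of what this SFT setup could look like with TRL. This is an assumption reconstructed from the card metadata, not the published training script; the split names, dataset concatenation, and hyperparameters are illustrative:

```python
from datasets import load_dataset, concatenate_datasets
from trl import SFTConfig, SFTTrainer

# The two summary datasets listed in the card metadata (split name assumed)
model_summaries = load_dataset("davanstrien/hub-tldr-model-summaries-llama", split="train")
dataset_summaries = load_dataset("davanstrien/hub-tldr-dataset-summaries-llama", split="train")
# Assumes both datasets share a compatible (e.g. chat-formatted) schema
train_dataset = concatenate_datasets([model_summaries, dataset_summaries])

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="SmolLM2-360M-tldr-sft"),
)
trainer.train()
```
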
## Usage
It is recommended to use the chat template when running inference with the model. Additionally, you should prepend either `<MODEL_CARD>` or `<DATASET_CARD>` to the start of the card you want to summarize. The training data used the body of the model or dataset card, i.e., the part after the YAML header, so you will likely get better results by passing only this part of the card.
So far, I have found that a low temperature of `0.4` generates better results.
Example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from huggingface_hub import ModelCard
# Load the card to summarize; `.text` is the body after the YAML header
card = ModelCard.load("davanstrien/Smol-Hub-tldr")
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("davanstrien/Smol-Hub-tldr")
model = AutoModelForCausalLM.from_pretrained("davanstrien/Smol-Hub-tldr")
# Format input according to the chat template
messages = [{"role": "user", "content": f"<MODEL_CARD>{card.text}"}]
# Encode with the chat template
inputs = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
)
# Generate with stop tokens
outputs = model.generate(
inputs,
max_new_tokens=60,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
temperature=0.4,
do_sample=True,
)
input_length = inputs.shape[1]
response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=False)
# Extract just the summary part
summary = response.split("<CARD_SUMMARY>")[-1].split("</CARD_SUMMARY>")[0]
print(summary)
>>> "The Smol-Hub-tldr model is a fine-tuned version of SmolLM2-360M designed to generate concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."
```
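The same approach works for dataset cards; switch the prefix to `<DATASET_CARD>`. A minimal sketch, reusing the tokenizer and model loaded above (the dataset repo id here is just an illustrative choice):
```python
from huggingface_hub import DatasetCard

# Illustrative dataset repo id; any dataset card works
card = DatasetCard.load("davanstrien/hub-tldr-dataset-summaries-llama")
messages = [{"role": "user", "content": f"<DATASET_CARD>{card.text}"}]
# ...then apply the chat template and generate exactly as above
```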
The model should currently close its summary with a `</CARD_SUMMARY>` token (I'm still experimenting with this), so you can also use this as a stopping criterion when using `pipeline` inference.
```python
from transformers import pipeline, StoppingCriteria, StoppingCriteriaList
from huggingface_hub import ModelCard
import torch
# Stop generation as soon as any of the given token ids is produced
class StopOnTokens(StoppingCriteria):
def __init__(self, tokenizer, stop_token_ids):
self.stop_token_ids = stop_token_ids
self.tokenizer = tokenizer
def __call__(
self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
) -> bool:
for stop_id in self.stop_token_ids:
if input_ids[0][-1] == stop_id:
return True
return False
# Initialize pipeline
pipe = pipeline("text-generation", "davanstrien/Smol-Hub-tldr")
tokenizer = pipe.tokenizer
# Build the prompt in the same format as above
card = ModelCard.load("davanstrien/Smol-Hub-tldr")
messages = [{"role": "user", "content": f"<MODEL_CARD>{card.text}"}]
# Get the token IDs for stopping
stop_token_ids = [
    # add_special_tokens=False so we get the tag's own id, not an appended special token
    tokenizer.encode("</CARD_SUMMARY>", add_special_tokens=False)[-1],
    tokenizer.eos_token_id,
]
# Create stopping criteria
stopping_criteria = StoppingCriteriaList([StopOnTokens(tokenizer, stop_token_ids)])
# Generate with stopping criteria
response = pipe(
messages,
max_new_tokens=50,
do_sample=True,
    temperature=0.4,  # match the recommended low temperature
stopping_criteria=stopping_criteria,
return_full_text=False,
)
# Clean up the response: drop the closing tag if it was generated
summary = response[0]["generated_text"].split("</CARD_SUMMARY>")[0].strip()
print(summary)
>>> "This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."
```
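Alternatively, recent `transformers` versions support a `stop_strings` argument to `generate`, which avoids the custom `StoppingCriteria` class. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` from the first example:
```python
# stop_strings requires passing the tokenizer so the strings can be matched
outputs = model.generate(
    inputs,
    max_new_tokens=60,
    temperature=0.4,
    do_sample=True,
    stop_strings=["</CARD_SUMMARY>"],
    tokenizer=tokenizer,
)
```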
## Framework Versions
- TRL 0.14.0
- Transformers 4.48.3
- PyTorch 2.6.0
- Datasets 3.2.0
- Tokenizers 0.21.0 |