Commit 5c4f2fd · Parent(s): acf6917
Update README.md to enhance model description and add advanced example for ArXiv ML trends analysis

README.md CHANGED

README.md before this commit (changed sections):

@@ -32,6 +32,7 @@ That's it! No installation, no setup - just `uv run`.
- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics

@@ -52,13 +53,15 @@ uv run classify-dataset.py \
### Arguments

**Required:**
- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing

@@ -77,6 +80,7 @@ All styles benefit from structured output guarantees - the model can only output
## Examples

### Sentiment Analysis
```bash
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \

@@ -86,6 +90,7 @@ uv run classify-dataset.py \
```

### Support Ticket Classification
```bash
uv run classify-dataset.py \
  --input-dataset user/support-tickets \

@@ -96,6 +101,7 @@ uv run classify-dataset.py \
```

### News Categorization
```bash
uv run classify-dataset.py \
  --input-dataset ag_news \

@@ -109,30 +115,17 @@ uv run classify-dataset.py \

This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires Pro subscription or Team/Enterprise organization):

```bash
# Run on L4 GPU with vLLM image
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai:latest \
  classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified

# Run on A10 GPU with custom model
hf jobs uv run \
  --flavor a10g-large \
  --image vllm/vllm-openai:latest \
  classify-dataset.py \
  --input-dataset user/reviews \
  --column review_text \
  --labels "1,2,3,4,5" \
  --output-dataset user/reviews-rated \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --prompt-style detailed
```

### GPU Flavors
- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models

@@ -144,7 +137,7 @@ hf jobs uv run \

### Using Different Models

```bash
# Larger model for complex classification

@@ -154,7 +147,7 @@ uv run classify-dataset.py \
  --labels "contract,patent,brief,memo,other" \
  --output-dataset user/legal-classified \
  --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

@@ -170,7 +163,7 @@ uv run classify-dataset.py \

## Performance

- **SmolLM3-3B**: ~50-100 texts/second on A10
- **7B models**: ~20-50 texts/second on A10
- vLLM automatically optimizes batching for best throughput

@@ -186,32 +179,76 @@ The script loads your dataset, preprocesses texts, classifies each one using gui
## Troubleshooting

### CUDA Not Available
This script requires a GPU. Run it on:
- A machine with NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory
- Use a smaller model
- Use a larger GPU (e.g., a100-large)

### Invalid/Skipped Texts
- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters

### Classification Quality
- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues
If you see `ImportError: cannot import name 'GuidedDecodingParams'`:
- Your vLLM version is too old (requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- UV should automatically install the correct version

## License

This script is provided as-is for use with the UV Scripts organization.

README.md after this commit (changed sections):

- **Guaranteed valid outputs** using vLLM's guided decoding with outlines
- **Zero-shot classification** with structured generation
- **GPU-optimized** with vLLM's automatic batching for maximum efficiency
- **Default model**: HuggingFaceTB/SmolLM3-3B (fast 3B model, easily changeable)
- **Robust text handling** with preprocessing and validation
- **Three prompt styles** for different use cases
- **Automatic progress tracking** and detailed statistics

### Arguments

**Required:**

- `--input-dataset`: Hugging Face dataset ID (e.g., `stanfordnlp/imdb`, `user/my-dataset`)
- `--column`: Name of the text column to classify
- `--labels`: Comma-separated classification labels (e.g., `"spam,ham"`)
- `--output-dataset`: Where to save the classified dataset

**Optional:**

- `--model`: Model to use (default: **`HuggingFaceTB/SmolLM3-3B`** - a fast 3B parameter model)
- `--prompt-style`: Choose from `simple`, `detailed`, or `reasoning` (default: `simple`)
- `--split`: Dataset split to process (default: `train`)
- `--max-samples`: Limit samples for testing

## Examples

### Sentiment Analysis

```bash
uv run classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  # ...
```

### Support Ticket Classification

```bash
uv run classify-dataset.py \
  --input-dataset user/support-tickets \
  # ...
```

### News Categorization

```bash
uv run classify-dataset.py \
  --input-dataset ag_news \
  # ...
```

This script is optimized for [Hugging Face Jobs](https://huggingface.co/docs/hub/spaces-gpu-jobs) (requires Pro subscription or Team/Enterprise organization):

```bash
# Run on L4 GPU with vLLM image
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai:latest \
  https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
  --input-dataset stanfordnlp/imdb \
  --column text \
  --labels "positive,negative" \
  --output-dataset user/imdb-classified
```

### GPU Flavors
- `t4-small`: Budget option for smaller models
- `l4x1`: Good balance for 7B models

### Using Different Models

By default, this script uses **HuggingFaceTB/SmolLM3-3B** - a fast, efficient 3B parameter model that's perfect for most classification tasks. You can easily use any other instruction-tuned model:

```bash
# Larger model for complex classification
uv run classify-dataset.py \
  # ...
  --labels "contract,patent,brief,memo,other" \
  --output-dataset user/legal-classified \
  --model Qwen/Qwen2.5-7B-Instruct
```

### Large Datasets

## Performance

- **SmolLM3-3B (default)**: ~50-100 texts/second on A10
- **7B models**: ~20-50 texts/second on A10
- vLLM automatically optimizes batching for best throughput

## Troubleshooting

### CUDA Not Available

This script requires a GPU. Run it on:

- A machine with NVIDIA GPU
- HF Jobs (recommended)
- Cloud GPU instances

### Out of Memory

- Use a smaller model
- Use a larger GPU (e.g., a100-large)

### Invalid/Skipped Texts

- Texts shorter than 3 characters are skipped
- Empty or None values are marked as invalid
- Very long texts are truncated to 4000 characters
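
As a rough illustration of these rules (not the script's actual code, which may differ in naming and details):

```python
# Illustrative sketch of the validation rules listed above.
def preprocess_text(text):
    """Return a cleaned string, or None if the text should be skipped."""
    if text is None or not str(text).strip():
        return None        # empty or None values are marked as invalid
    cleaned = str(text).strip()
    if len(cleaned) < 3:
        return None        # texts shorter than 3 characters are skipped
    return cleaned[:4000]  # very long texts are truncated to 4000 characters
```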

### Classification Quality

- With guided decoding, outputs are guaranteed to be valid labels
- For better results, use clear and distinct label names
- Try the `reasoning` prompt style for complex classifications
- Use a larger model for nuanced tasks

### vLLM Version Issues

If you see `ImportError: cannot import name 'GuidedDecodingParams'`:

- Your vLLM version is too old (requires >= 0.6.6)
- The script specifies the correct version in its dependencies
- UV should automatically install the correct version
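
For reference, the guided-decoding mechanism the script relies on looks roughly like this (a minimal sketch assuming vLLM >= 0.6.6; the model, labels, and prompt are illustrative, not the script's exact code):

```python
# Minimal sketch of constrained label generation with vLLM guided decoding.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["positive", "negative"]
llm = LLM(model="HuggingFaceTB/SmolLM3-3B")

params = SamplingParams(
    temperature=0.0,
    max_tokens=10,
    guided_decoding=GuidedDecodingParams(choice=labels),  # output restricted to the label set
)

outputs = llm.generate(["Classify the sentiment of: 'I loved this movie.'"], params)
print(outputs[0].outputs[0].text)  # always one of the labels
```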

## Advanced Example: ArXiv ML Trends Analysis

For a more complex real-world example, we provide scripts to analyze ML research trends from ArXiv papers:

### Step 1: Prepare the Dataset

```bash
# Filter and prepare ArXiv CS papers from 2024
uv run prepare_arxiv_2024.py
```

This creates a filtered dataset of CS papers with combined title+abstract text.
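
A hypothetical sketch of that preparation step (the source dataset ID and column names below are placeholders; the real prepare_arxiv_2024.py may work differently):

```python
# Hypothetical sketch of the preparation step described above.
from datasets import load_dataset

raw = load_dataset("your-username/arxiv-metadata", split="train")  # placeholder dataset ID

# Keep 2024 CS papers (assumes `categories` and `update_date` metadata columns)
cs_2024 = raw.filter(
    lambda row: row["categories"].startswith("cs.") and row["update_date"].startswith("2024")
)

# Combine title and abstract into a single text column for classification
cs_2024 = cs_2024.map(lambda row: {"text": f"{row['title']}\n\n{row['abstract']}"})

cs_2024.push_to_hub("your-username/arxiv-cs-2024")  # placeholder output repo
```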

### Step 2: Run Classification with Python API

```bash
# Use HF Jobs Python API to classify papers
uv run run_arxiv_classification.py
```

This script demonstrates:

- Using `run_uv_job()` from the Python API (sketched below)
- Classifying into modern ML trends (reasoning, agents, multimodal, robotics, etc.)
- Handling authentication and job monitoring
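
A rough sketch of what such a submission looks like (the `run_uv_job` argument names, dataset IDs, and returned attributes are assumptions based on the huggingface_hub Jobs API, not the script's exact code):

```python
# Rough sketch of submitting the classification job via the HF Jobs Python API.
from huggingface_hub import run_uv_job

labels = ",".join([
    "reasoning_systems", "agents_autonomous", "multimodal_models",
    "robotics_embodied", "efficient_inference", "alignment_safety",
    "generative_models", "foundational_other",
])

job = run_uv_job(
    "https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py",
    script_args=[
        "--input-dataset", "your-username/arxiv-cs-2024",   # placeholder dataset IDs
        "--column", "text",
        "--labels", labels,
        "--output-dataset", "your-username/arxiv-ml-trends",
    ],
    flavor="l4x1",
    image="vllm/vllm-openai:latest",
)
print(job.id)  # keep the job ID for monitoring and log retrieval
```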

The classification categories include:

- `reasoning_systems`: Chain-of-thought, reasoning, problem solving
- `agents_autonomous`: Agents, tool use, autonomous systems
- `multimodal_models`: Vision-language, audio, multi-modal
- `robotics_embodied`: Robotics, embodied AI, manipulation
- `efficient_inference`: Quantization, distillation, edge deployment
- `alignment_safety`: RLHF, alignment, safety, interpretability
- `generative_models`: Diffusion, generation, synthesis
- `foundational_other`: Other foundational ML/AI research

## License

This script is provided as-is for use with the UV Scripts organization.