modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
vaibhavbasidoni/gemma-3-finetuneiamge-4b
|
vaibhavbasidoni
| 2025-04-26T05:30:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T03:31:20Z |
---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** vaibhavbasidoni
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nitrals-Loras/vmc-12B-1.5-lora
|
Nitrals-Loras
| 2025-04-26T05:05:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Nitral-AI/vmc-12B-1.25",
"base_model:adapter:Nitral-AI/vmc-12B-1.25",
"region:us"
] | null | 2025-04-26T05:05:44Z |
---
base_model: Nitral-AI/vmc-12B-1.25
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
Otakadelic/Meta-Llama-3.1-8B-Instruct-abliterated-Q6_K-GGUF
|
Otakadelic
| 2025-04-26T05:00:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:Georgsius/Meta-Llama-3.1-8B-Instruct-abliterated",
"base_model:quantized:Georgsius/Meta-Llama-3.1-8B-Instruct-abliterated",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T04:59:35Z |
---
base_model: Georgsius/Meta-Llama-3.1-8B-Instruct-abliterated
library_name: transformers
license: llama3.1
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Otakadelic/Meta-Llama-3.1-8B-Instruct-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`Georgsius/Meta-Llama-3.1-8B-Instruct-abliterated`](https://huggingface.co/Georgsius/Meta-Llama-3.1-8B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Georgsius/Meta-Llama-3.1-8B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Otakadelic/Meta-Llama-3.1-8B-Instruct-abliterated-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Otakadelic/Meta-Llama-3.1-8B-Instruct-abliterated-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Otakadelic/Meta-Llama-3.1-8B-Instruct-abliterated-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Otakadelic/Meta-Llama-3.1-8B-Instruct-abliterated-Q6_K-GGUF --hf-file meta-llama-3.1-8b-instruct-abliterated-q6_k.gguf -c 2048
```
|
earningrewardscrypto/earn
|
earningrewardscrypto
| 2025-04-26T04:43:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T04:43:22Z |
---
license: apache-2.0
---
|
newtaker3475/newtaker1121006
|
newtaker3475
| 2025-04-26T02:42:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T02:42:57Z |
---
license: apache-2.0
---
|
Petercusin/English-news-category-classifier
|
Petercusin
| 2025-04-26T01:51:18Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"code",
"text-classification",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-04-26T00:24:54Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
tags:
- code
Eval Results: {'eval_loss': 1.6844203472137451, 'eval_accuracy': 0.5371031746031746, 'eval_f1': 0.5281888823201883, 'eval_precision': 0.5347082372987961, 'eval_recall': 0.5371031746031746, 'eval_runtime': 584.5829, 'eval_samples_per_second': 8.622, 'eval_steps_per_second': 0.539, 'epoch': 2.0}
---
## 1. Model Details
| **Attribute** | **Value** |
|-------------------------------|-----------------------------|
| Developed by | Petercusin (Guisheng Pan) |
| Model Architecture | DistilBERT |
| Activation Function | GELU |
| Dimensions | 768 |
| Size | 255M |
| Hidden Dimensions | 3072 |
| Attention Dropout | 0.1 |
| Dropout | 0.1 |
| Sequence Classification Dropout | 0.2 |
| Number of Heads | 12 |
| Number of Layers | 6 |
| Max Position Embeddings | 512 |
| Vocabulary Size | 30522 |
| Initializer Range | 0.02 |
| Tied Weights | True |
| Problem Type | Multi-Label Classification |
## 2. Model Description
This model is designed to classify English news articles into various domains or categories. It can be used for tasks such as news categorization, content organization, and topic-based filtering.
## ⚙️ 3. How to Get Started with the Model
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 26 08:48:07 2025
@author: Petercusin
"""
import torch
import torch.nn.functional as F
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

# Step 1: Load the trained model and tokenizer
# (use the full repo id "Petercusin/English-news-category-classifier" when loading from the Hub)
tokenizer = DistilBertTokenizer.from_pretrained("English-news-category-classifier")
model = DistilBertForSequenceClassification.from_pretrained("English-news-category-classifier")

# Step 2: Define a function to preprocess the input text
def preprocess_text(text):
    inputs = tokenizer(text, padding='max_length', truncation=True, return_tensors='pt')
    return inputs

# Step 3: Define a function to make predictions
def predict(text):
    # Preprocess the input text
    inputs = preprocess_text(text)
    # Make predictions
    with torch.no_grad():
        outputs = model(**inputs)
    # Get the predicted class probabilities
    logits = outputs.logits
    probabilities = F.softmax(logits, dim=1).squeeze().tolist()
    predicted_class_id = torch.argmax(logits, dim=1).item()
    return predicted_class_id, probabilities

# Step 4: Load the label map from the model's configuration
label_map = model.config.id2label

# Example usage
new_titles = [
    "Stock markets reach all-time high amid economic recovery",
    "Scientists discover new species in Amazon rainforest",
    "Congress passes new bill on healthcare reforms",
    "The stairway to love: Chongqing's real-life fairy tale",
    "African delegation take in Shanghai sights on Huangpu cruise",
    "China expected to achieve higher grain output in 2025: report",
    "China continued its dominance at the 2025 World Aquatics Diving World Cup in Guadalajara, sweeping all four gold medals on the third day of competitions on Saturday, along with one silver.",
    "A 'DeepSeek moment for AI agents' as China launches Manus",
    "Developed by Monica.im, Manus achieved top scores on the GAIA (General AI Assistant) benchmark, exceeding those of OpenAI's GPT (generative pre-trained transformer) tools. GAIA is a real-world benchmark for general AI assistants.",
    "This week and without warning, a horrid video popped up on my phone. A puppy had its mouth and paws bound with tape, and was hanging in a plastic bag by the motorway. I immediately flicked past, but the image stayed with me. This was something I didn’t want to see, yet there it was at 11am on a Tuesday."
]

for input_text in new_titles:
    predicted_class_id, probabilities = predict(input_text)
    predicted_category = label_map[predicted_class_id]
    print(f"Predicted category: {predicted_category}")
    print(f"Text to classify: {input_text}")
    predicted_probability = probabilities[predicted_class_id]
    print(f"Probability of the predicted category: {predicted_probability:.4f}\n")
```
## Result
```text
Predicted category: BUSINESS
Text to classify: Stock markets reach all-time high amid economic recovery
Probability of the predicted category: 0.5707
Predicted category: SCIENCE
Text to classify: Scientists discover new species in Amazon rainforest
Probability of the predicted category: 0.5186
Predicted category: POLITICS
Text to classify: Congress passes new bill on healthcare reforms
Probability of the predicted category: 0.6175
Predicted category: ARTS
Text to classify: The stairway to love: Chongqing's real-life fairy tale
Probability of the predicted category: 0.2746
Predicted category: WORLDPOST
Text to classify: African delegation take in Shanghai sights on Huangpu cruise
Probability of the predicted category: 0.4686
Predicted category: GREEN
Text to classify: China expected to achieve higher grain output in 2025: report
Probability of the predicted category: 0.2889
Predicted category: SPORTS
Text to classify: China continued its dominance at the 2025 World Aquatics Diving World Cup in Guadalajara, sweeping all four gold medals on the third day of competitions on Saturday, along with one silver.
Probability of the predicted category: 0.4540
Predicted category: TECH
Text to classify: A 'DeepSeek moment for AI agents' as China launches Manus
Probability of the predicted category: 0.3297
Predicted category: TECH
Text to classify: Developed by Monica.im, Manus achieved top scores on the GAIA (General AI Assistant) benchmark, exceeding those of OpenAI's GPT (generative pre-trained transformer) tools. GAIA is a real-world benchmark for general AI assistants.
Probability of the predicted category: 0.8065
Predicted category: GOOD NEWS
Text to classify: This week and without warning, a horrid video popped up on my phone. A puppy had its mouth and paws bound with tape, and was hanging in a plastic bag by the motorway. I immediately flicked past, but the image stayed with me. This was something I didn’t want to see, yet there it was at 11am on a Tuesday.
Probability of the predicted category: 0.1350
```
## 4. Training Data
The model was trained on a dataset of news articles categorized into 42 different domains. The categories include:
| **Column 1** | **Column 2** |
|--------------|--------------|
| 0 LATINO VOICES | 21 WORLD NEWS |
| 1 ARTS | 22 QUEER VOICES |
| 2 CULTURE & ARTS | 23 PARENTING |
| 3 HOME & LIVING | 24 MONEY |
| 4 ARTS & CULTURE | 25 SPORTS |
| 5 THE WORLDPOST | 26 POLITICS |
| 6 GOOD NEWS | 27 WELLNESS |
| 7 FIFTY | 28 GREEN |
| 8 CRIME | 29 BUSINESS |
| 9 RELIGION | 30 TECH |
| 10 PARENTS | 31 ENVIRONMENT |
| 11 TASTE | 32 WOMEN |
| 12 WORLDPOST | 33 U.S. NEWS |
| 13 EDUCATION | 34 HEALTHY LIVING |
| 14 ENTERTAINMENT | 35 DIVORCE |
| 15 FOOD & DRINK | 36 MEDIA |
| 16 TRAVEL | 37 WEDDINGS |
| 17 STYLE & BEAUTY | 38 BLACK VOICES |
| 18 IMPACT | 39 STYLE |
| 19 WEIRD NEWS | 40 COMEDY |
| 20 COLLEGE | 41 SCIENCE |
## 5. Evaluation
- The model was evaluated on a test set, and the following metrics were obtained:
- Evaluation Loss: 1.6844
- Evaluation Accuracy: 0.5371
- Evaluation F1 Score: 0.5282
- Evaluation Precision: 0.5347
- Evaluation Recall: 0.5371
- Evaluation Runtime: 584.58 seconds
- Evaluation Samples per Second: 8.622
- Evaluation Steps per Second: 0.539
## 🤝 6. Model Card Contact
Author: Pan Guisheng, a PhD student at the Graduate Institute of Interpretation and Translation of Shanghai International Studies University. Email: [email protected]
|
hpieris/VibeLlama-11b-seed-123
|
hpieris
| 2025-04-26T00:37:54Z | 0 | 0 | null |
[
"safetensors",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:mit",
"region:us"
] |
text-generation
| 2025-04-26T00:37:47Z |
---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- text-generation
license: mit
---
This LoRA adapter was fine-tuned from `meta-llama/Llama-3.2-11B-Vision-Instruct` on the IMDb dataset using QLoRA.
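A minimal loading sketch (not part of the original card): it assumes this repo hosts the PEFT LoRA adapter and that the adapter targets the language stack of the vision-instruct base model. Repo ids are taken from this card; everything else is an assumption.
```python
# Hedged sketch: load the base vision-instruct model, then attach this LoRA adapter.
# Assumes the repo contains a PEFT adapter (adapter_config.json + weights).
from transformers import AutoProcessor, MllamaForConditionalGeneration
from peft import PeftModel

base = MllamaForConditionalGeneration.from_pretrained(
    "meta-llama/Llama-3.2-11B-Vision-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "hpieris/VibeLlama-11b-seed-123")
processor = AutoProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
```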
|
kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e
|
kk-aivio
| 2025-04-25T23:01:57Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-04-25T23:01:16Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Phi-3-mini-4k-instruct
model-index:
- name: kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
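Since no usage snippet is provided, here is a minimal loading sketch, assuming this repo hosts a PEFT LoRA adapter for the base model named in the metadata (repo ids are from this card; the rest is an assumption):
```python
# Hedged sketch: attach this PEFT adapter to its base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3-mini-4k-instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "kk-aivio/3a4f1757-da46-41b1-8d5f-d6bb2f9f5a3e")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Phi-3-mini-4k-instruct")
```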
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
dzanbek/6fef2d0d-3d6c-4833-9eca-2347783ffbd9
|
dzanbek
| 2025-04-25T22:47:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2025-04-25T22:40:26Z |
---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6fef2d0d-3d6c-4833-9eca-2347783ffbd9
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 92b2589f074b137f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/92b2589f074b137f_train_data.json
  type:
    field_instruction: premise
    field_output: hypothesis
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/6fef2d0d-3d6c-4833-9eca-2347783ffbd9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/92b2589f074b137f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ef3f86ff-e1c3-4627-acac-22d43236fd1d
wandb_project: s56-2
wandb_run: your_name
wandb_runid: ef3f86ff-e1c3-4627-acac-22d43236fd1d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6fef2d0d-3d6c-4833-9eca-2347783ffbd9
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit bitsandbytes, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9912 | 0.1301 | 200 | 4.2478 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
mpasila/shisa-v2-JP-EN-Translator-v0.1-12B
|
mpasila
| 2025-04-25T20:59:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:NilanE/ParallelFiction-Ja_En-100k",
"dataset:mpasila/ja_en_massive_1000_sharegpt_filtered_fixed_short",
"base_model:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:finetune:shisa-ai/shisa-v2-mistral-nemo-12b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T19:51:43Z |
---
base_model: shisa-ai/shisa-v2-mistral-nemo-12b
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- NilanE/ParallelFiction-Ja_En-100k
- mpasila/ja_en_massive_1000_sharegpt_filtered_fixed_short
---
This was trained on only around 191 examples; it is just a quick test, and the full set of around 1k examples will be released soon.
I've done a quick manual cleaning of the data using Notepad++. There may still be broken content or other problems.
Uses ChatML; recommended system prompt: `You are an AI assistant that translates Japanese to English accurately.`
Uses [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k) for the data.
LoRA: [mpasila/shisa-v2-JP-EN-Translator-v0.1-LoRA-12B](https://huggingface.co/mpasila/shisa-v2-JP-EN-Translator-v0.1-LoRA-12B)
Uses the usual rank 128 and alpha 32. Trained with a 16384-token context window using QLoRA.
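For reference, a ChatML-formatted prompt with the recommended system prompt would look like this (the Japanese source text is a placeholder, not from the original card):
```
<|im_start|>system
You are an AI assistant that translates Japanese to English accurately.<|im_end|>
<|im_start|>user
{Japanese source text}<|im_end|>
<|im_start|>assistant
```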
**Token Count Statistics:**
- Total conversations: 191
- Total tokens: 918486
- Average tokens per conversation: 4808.83
- Median tokens per conversation: 4187.0
- Maximum tokens in a conversation: 13431
- Minimum tokens in a conversation: 512
**Token Distribution by Role:**
- System messages: 2483 tokens (0.27%)
- Human messages: 494038 tokens (53.79%)
- Assistant messages: 421965 tokens (45.94%)
**Token Count Distribution:**
- 0-512: 0 conversations (0.00%)
- 513-1024: 4 conversations (2.09%)
- 1025-2048: 10 conversations (5.24%)
- 2049-4096: 77 conversations (40.31%)
- 4097-8192: 83 conversations (43.46%)
- 8193-16384: 17 conversations (8.90%)
- 16385+: 0 conversations (0.00%)
# Uploaded shisa-v2-JP-EN-Translator-v0.1-12B model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** shisa-ai/shisa-v2-mistral-nemo-12b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sang-Buster/atc-llama
|
Sang-Buster
| 2025-04-25T20:37:17Z | 6 | 0 | null |
[
"safetensors",
"llama",
"Speech Recognition",
"ATC",
"Unsloth",
"LoRA-Merged",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] |
text-generation
| 2025-04-24T21:00:23Z |
---
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
tags:
- Speech Recognition
- ATC
- Unsloth
- LoRA-Merged
---
# ATC Communication Expert Model (Merged)
A fine-tuned model specialized in improving and analyzing Air Traffic Control (ATC) communications, with LoRA adapters merged into the base model.
## Model Details
### Model Description
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct with merged LoRA adapters, optimized for processing Air Traffic Control communications. It can:
- Improve raw ATC transcripts with proper punctuation and formatting
- Identify communication intentions (pilot requests, ATC instructions, etc.)
- Extract key information such as flight numbers, altitudes, headings, and other numerical data
- Analyze speaker roles and communication patterns
The model was created by merging LoRA adapters (fine-tuned on ATC communications) into the Llama 3B base model, creating a unified model optimized for this specialized domain.
- **Developed by:** [Sang-Buster](https://github.com/Sang-Buster)
- **Model type:** Llama 3B with merged LoRA adapters
- **Language(s):** English, specialized for ATC terminology
- **License:** Same as the base model
- **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct
## Uses
### Direct Use
This model is intended for:
- Transcribing and formatting raw ATC communications
- Training ATC communication skills
- Analyzing ATC communication patterns
- Extracting structured data from ATC communications
- Educational purposes for those learning ATC communication protocols
### Downstream Use
The model can be integrated into:
- Air traffic management training systems
- Communication analysis tools
- ATC transcript post-processing pipelines
- Aviation safety monitoring systems
- Radio communication enhancement systems
### Out-of-Scope Use
This model is not suitable for:
- Real-time ATC operations or safety-critical decision-making
- Full language translation (it's specialized for ATC terminology only)
- General language processing outside the ATC domain
- Any application where model errors could impact flight safety
## Bias, Risks, and Limitations
- The model is specialized for ATC communications and may not perform well on general text
- It may have limitations with accents or non-standard ATC phraseology
- Performance depends on audio transcription quality for real-world applications
- Not intended for safety-critical applications without human verification
- May have biases based on the training data distribution
### Recommendations
- Always have human verification for safety-critical applications
- Use in conjunction with standard ATC protocols, not as a replacement
- Provide clear domain context for optimal performance
- Test thoroughly with diverse ATC communications before deployment
- Consider fine-tuning further on your specific ATC subdomain if needed
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "atc_llama_merged",  # local path to the merged model
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("atc_llama_merged")
# Process an ATC message
instruction = "As an ATC communication expert, improve this transcript and analyze its intentions and data."
message = "southwest five niner two turn left heading three four zero descend and maintain flight level two five zero"
prompt = f"<|begin_of_text|><|header_start|>user<|header_end|>\n\n{instruction}\n\nOriginal: {message}<|eot|><|header_start|>assistant<|header_end|>\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate improved transcript and analysis
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
response = tokenizer.decode(outputs[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
## Model Creation Process
### Base Model and Adapters
- **Base model:** meta-llama/Llama-3.2-3B-Instruct
- **Adapter source:** LoRA adapters fine-tuned on ATC communications data
- **Merge method:** PEFT adapter merging into base model weights
### Merging Procedure
The model creation involved the following steps (see the sketch after this list):
1. Loading the base Llama 3B model
2. Loading LoRA adapters fine-tuned on ATC communications data
3. Merging the adapters into the base model's weights
4. Saving the resulting unified model
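A minimal sketch of this merge flow, assuming a local adapter directory named `atc_lora_adapters` (a hypothetical path, not from the original card):
```python
# Hedged sketch of the four steps above: load base, load adapters, merge, save.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct", torch_dtype="auto"
)
merged = PeftModel.from_pretrained(base, "atc_lora_adapters").merge_and_unload()
merged.save_pretrained("atc_llama_merged")
```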
## Evaluation
### Testing
The model should be tested on diverse ATC communications, including:
- Clearances and instructions
- Pilot requests and reports
- Emergency communications
- Different accents and speaking patterns
## Technical Specifications
### Model Architecture and Objective
- **Base architecture:** meta-llama/Llama-3.2-3B-Instruct
- **Adaptation method:** LoRA adapters merged into base weights
- **Training objective:** Improving and analyzing ATC communications
### Model Card Contact
For issues or questions about this model, please open a discussion in the repository.
|
ykarout/phi4-deepseek-r1-distilled-gguf-v5
|
ykarout
| 2025-04-25T20:08:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"phi-4",
"deepseek",
"r1",
"reasoning",
"code",
"math",
"science",
"unsloth",
"text-generation",
"en",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-25T15:24:42Z |
---
license: mit
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
language:
- en
base_model:
- microsoft/phi-4
- unsloth/phi-4
pipeline_tag: text-generation
library_name: transformers
tags:
- phi-4
- deepseek
- r1
- reasoning
- code
- math
- science
- unsloth
---
# Model Card for Model ID
Phi-4 Unsloth model trained to generate DeepSeek-R1-style reasoning when the system prompt is set to "detailed thinking on".
## Model Details
### Model Description
This fine-tuned model generates enhanced chains of thought and reasoning, and produces "aha moments" akin to DeepSeek, whenever the system prompt is set to "detailed thinking on".
Test any questions from trending datasets about code, math, and science with the system prompt set and unset, and you can clearly see the difference in the generated output.
A Modelfile is included with the gguf files that can be used to load the model into Ollama. You have to set the system prompt manually after loading the model in Ollama, since by
default there is no system prompt. You can use /set SYSTEM "detailed thinking on" and then input your prompt. The Modelfile includes optimal parameters, but you can experiment
with a different set of parameters based on your desired goal/output.
## Uses
Tasks requiring reasoning, chains of thought, exploring several approaches, etc.
### Recommendations
Use the parameters in the Modelfile and set the system prompt to "detailed thinking on" whenever you require long reasoning outputs. Leave the system prompt unset when you want a direct,
to-the-point answer without reasoning chains.
It is important to use the chat template embedded in the Modelfile to ensure optimal generations and avoid endless generations or loops.
## How to Get Started with the Model
Download the gguf file and Modelfile into the same folder, then create the model with `ollama create phi4-deepseek -f Modelfile`. Run it with `ollama run`, set the system prompt, and
start prompting.
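Concretely, the steps look like this (the model name `phi4-deepseek` comes from this card; the rest follows standard Ollama usage):
```bash
ollama create phi4-deepseek -f Modelfile
ollama run phi4-deepseek
# inside the Ollama session, set the system prompt before your first message:
#   /set system "detailed thinking on"
```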
## Training Details
### Training Data
Nvidia datasets containing reasoning context produced by DeepSeek-R1.
### Training Procedure
Unsloth SFT Trainer
|
jdchang/full-dataset-bs-1024-lr-3e-4-sg-2-step-1458
|
jdchang
| 2025-04-25T18:23:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T18:23:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dagarcsot/yolo_finetuned_fruits
|
dagarcsot
| 2025-04-25T18:15:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"yolos",
"object-detection",
"generated_from_trainer",
"base_model:hustvl/yolos-tiny",
"base_model:finetune:hustvl/yolos-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-04-25T17:59:42Z |
---
library_name: transformers
license: apache-2.0
base_model: hustvl/yolos-tiny
tags:
- generated_from_trainer
model-index:
- name: yolo_finetuned_fruits
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
- Map: 0.5882
- Map 50: 0.8376
- Map 75: 0.6723
- Map Small: -1.0
- Map Medium: 0.6116
- Map Large: 0.5966
- Mar 1: 0.4201
- Mar 10: 0.7111
- Mar 100: 0.7683
- Mar Small: -1.0
- Mar Medium: 0.7071
- Mar Large: 0.7767
- Map Banana: 0.4758
- Mar 100 Banana: 0.7425
- Map Orange: 0.6281
- Mar 100 Orange: 0.8024
- Map Apple: 0.6608
- Mar 100 Apple: 0.76
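A minimal inference sketch (the repo id is this card's; the image path is a placeholder, not from the original card):
```python
# Hedged usage sketch for this fine-tuned YOLOS object detector.
from transformers import pipeline

detector = pipeline("object-detection", model="dagarcsot/yolo_finetuned_fruits")
for det in detector("fruits.jpg"):  # placeholder image path
    print(det["label"], round(det["score"], 3), det["box"])
```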
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 1.9700 | 0.0096 | 0.0268 | 0.0038 | -1.0 | 0.0155 | 0.0132 | 0.078 | 0.2026 | 0.3463 | -1.0 | 0.2343 | 0.3714 | 0.0132 | 0.2975 | 0.0096 | 0.3786 | 0.0058 | 0.3629 |
| No log | 2.0 | 120 | 1.6517 | 0.0553 | 0.1516 | 0.0414 | -1.0 | 0.111 | 0.0556 | 0.1359 | 0.2777 | 0.4308 | -1.0 | 0.3186 | 0.4454 | 0.0647 | 0.5175 | 0.0406 | 0.1976 | 0.0608 | 0.5771 |
| No log | 3.0 | 180 | 1.2778 | 0.1262 | 0.2428 | 0.1168 | -1.0 | 0.1877 | 0.1303 | 0.2519 | 0.5055 | 0.6286 | -1.0 | 0.5814 | 0.634 | 0.1024 | 0.6225 | 0.0983 | 0.4976 | 0.1778 | 0.7657 |
| No log | 4.0 | 240 | 1.0948 | 0.2377 | 0.4041 | 0.2352 | -1.0 | 0.4084 | 0.2402 | 0.3266 | 0.5759 | 0.7115 | -1.0 | 0.6371 | 0.7237 | 0.182 | 0.695 | 0.1717 | 0.7024 | 0.3596 | 0.7371 |
| No log | 5.0 | 300 | 1.0477 | 0.2746 | 0.4623 | 0.2895 | -1.0 | 0.2475 | 0.3142 | 0.3285 | 0.609 | 0.7315 | -1.0 | 0.6257 | 0.7458 | 0.221 | 0.7075 | 0.1828 | 0.7214 | 0.42 | 0.7657 |
| No log | 6.0 | 360 | 1.0028 | 0.3661 | 0.6059 | 0.4064 | -1.0 | 0.4221 | 0.3982 | 0.3651 | 0.6231 | 0.7251 | -1.0 | 0.6229 | 0.7379 | 0.2698 | 0.7 | 0.3568 | 0.7238 | 0.4716 | 0.7514 |
| No log | 7.0 | 420 | 0.9809 | 0.3532 | 0.5656 | 0.4002 | -1.0 | 0.4557 | 0.3731 | 0.3569 | 0.6472 | 0.7488 | -1.0 | 0.6829 | 0.7591 | 0.3239 | 0.715 | 0.3333 | 0.7714 | 0.4025 | 0.76 |
| No log | 8.0 | 480 | 0.9679 | 0.4348 | 0.6762 | 0.4868 | -1.0 | 0.5782 | 0.4375 | 0.3547 | 0.6527 | 0.7254 | -1.0 | 0.7343 | 0.7269 | 0.2877 | 0.68 | 0.4769 | 0.7619 | 0.5397 | 0.7343 |
| 1.2471 | 9.0 | 540 | 0.9173 | 0.4434 | 0.7005 | 0.5049 | -1.0 | 0.5147 | 0.4475 | 0.3646 | 0.6443 | 0.7348 | -1.0 | 0.6771 | 0.7408 | 0.3288 | 0.7225 | 0.4683 | 0.7619 | 0.5332 | 0.72 |
| 1.2471 | 10.0 | 600 | 0.8875 | 0.4834 | 0.7654 | 0.5497 | -1.0 | 0.5051 | 0.4991 | 0.369 | 0.6925 | 0.7589 | -1.0 | 0.6957 | 0.7689 | 0.3668 | 0.73 | 0.497 | 0.7952 | 0.5864 | 0.7514 |
| 1.2471 | 11.0 | 660 | 0.9261 | 0.4803 | 0.7507 | 0.5799 | -1.0 | 0.4907 | 0.4971 | 0.3818 | 0.6745 | 0.7525 | -1.0 | 0.6957 | 0.7629 | 0.3567 | 0.7175 | 0.5014 | 0.7714 | 0.5828 | 0.7686 |
| 1.2471 | 12.0 | 720 | 0.8520 | 0.4974 | 0.7451 | 0.5567 | -1.0 | 0.6198 | 0.4976 | 0.3946 | 0.691 | 0.7489 | -1.0 | 0.7157 | 0.7532 | 0.3709 | 0.7025 | 0.5588 | 0.7929 | 0.5626 | 0.7514 |
| 1.2471 | 13.0 | 780 | 0.8630 | 0.4998 | 0.7799 | 0.5682 | -1.0 | 0.546 | 0.5213 | 0.3848 | 0.6848 | 0.7519 | -1.0 | 0.6443 | 0.768 | 0.4078 | 0.7575 | 0.5624 | 0.7952 | 0.5292 | 0.7029 |
| 1.2471 | 14.0 | 840 | 0.8469 | 0.5071 | 0.776 | 0.5801 | -1.0 | 0.6247 | 0.5104 | 0.3913 | 0.7049 | 0.7579 | -1.0 | 0.6971 | 0.7682 | 0.3635 | 0.71 | 0.5271 | 0.781 | 0.6306 | 0.7829 |
| 1.2471 | 15.0 | 900 | 0.7995 | 0.5311 | 0.8059 | 0.5856 | -1.0 | 0.6156 | 0.5327 | 0.3958 | 0.7068 | 0.7576 | -1.0 | 0.7429 | 0.7592 | 0.3951 | 0.7175 | 0.5739 | 0.8095 | 0.6244 | 0.7457 |
| 1.2471 | 16.0 | 960 | 0.8150 | 0.5342 | 0.8046 | 0.6189 | -1.0 | 0.6285 | 0.5346 | 0.3974 | 0.7012 | 0.7505 | -1.0 | 0.7043 | 0.7556 | 0.4157 | 0.73 | 0.584 | 0.7929 | 0.603 | 0.7286 |
| 0.7135 | 17.0 | 1020 | 0.7887 | 0.5532 | 0.8155 | 0.6643 | -1.0 | 0.5982 | 0.5619 | 0.4184 | 0.7122 | 0.7656 | -1.0 | 0.6929 | 0.7758 | 0.4475 | 0.7425 | 0.5754 | 0.8 | 0.6365 | 0.7543 |
| 0.7135 | 18.0 | 1080 | 0.7961 | 0.5545 | 0.8237 | 0.6426 | -1.0 | 0.6024 | 0.5606 | 0.4042 | 0.7056 | 0.7583 | -1.0 | 0.6971 | 0.7648 | 0.4583 | 0.7425 | 0.6036 | 0.8095 | 0.6014 | 0.7229 |
| 0.7135 | 19.0 | 1140 | 0.7936 | 0.5726 | 0.8321 | 0.6599 | -1.0 | 0.6004 | 0.5838 | 0.4203 | 0.7209 | 0.7776 | -1.0 | 0.7071 | 0.7878 | 0.4648 | 0.75 | 0.5835 | 0.8 | 0.6695 | 0.7829 |
| 0.7135 | 20.0 | 1200 | 0.7948 | 0.5543 | 0.8208 | 0.638 | -1.0 | 0.5928 | 0.5617 | 0.4001 | 0.7032 | 0.7665 | -1.0 | 0.7 | 0.7747 | 0.4439 | 0.7525 | 0.5944 | 0.8071 | 0.6246 | 0.74 |
| 0.7135 | 21.0 | 1260 | 0.7850 | 0.5808 | 0.8357 | 0.6736 | -1.0 | 0.5831 | 0.5941 | 0.4118 | 0.7229 | 0.7766 | -1.0 | 0.7 | 0.7863 | 0.4928 | 0.765 | 0.6112 | 0.8048 | 0.6386 | 0.76 |
| 0.7135 | 22.0 | 1320 | 0.8025 | 0.5813 | 0.8356 | 0.6729 | -1.0 | 0.6177 | 0.5906 | 0.4188 | 0.7138 | 0.771 | -1.0 | 0.6871 | 0.7812 | 0.4719 | 0.755 | 0.6277 | 0.7952 | 0.6442 | 0.7629 |
| 0.7135 | 23.0 | 1380 | 0.7886 | 0.5795 | 0.83 | 0.6743 | -1.0 | 0.5957 | 0.589 | 0.4076 | 0.7065 | 0.7598 | -1.0 | 0.69 | 0.7679 | 0.4784 | 0.75 | 0.624 | 0.7952 | 0.6362 | 0.7343 |
| 0.7135 | 24.0 | 1440 | 0.8081 | 0.5787 | 0.8341 | 0.6563 | -1.0 | 0.5982 | 0.5875 | 0.4117 | 0.7084 | 0.7679 | -1.0 | 0.7114 | 0.7748 | 0.463 | 0.745 | 0.6192 | 0.7929 | 0.6538 | 0.7657 |
| 0.5383 | 25.0 | 1500 | 0.7858 | 0.5865 | 0.8318 | 0.6691 | -1.0 | 0.6285 | 0.5935 | 0.4216 | 0.7144 | 0.7729 | -1.0 | 0.7186 | 0.7792 | 0.473 | 0.75 | 0.624 | 0.8 | 0.6626 | 0.7686 |
| 0.5383 | 26.0 | 1560 | 0.7777 | 0.5935 | 0.8462 | 0.6778 | -1.0 | 0.6176 | 0.6011 | 0.4216 | 0.7151 | 0.7709 | -1.0 | 0.7143 | 0.7784 | 0.4799 | 0.7475 | 0.6363 | 0.8024 | 0.6643 | 0.7629 |
| 0.5383 | 27.0 | 1620 | 0.7821 | 0.5914 | 0.8388 | 0.6746 | -1.0 | 0.6231 | 0.5982 | 0.4209 | 0.7128 | 0.7685 | -1.0 | 0.7043 | 0.7771 | 0.4773 | 0.7375 | 0.6304 | 0.8024 | 0.6665 | 0.7657 |
| 0.5383 | 28.0 | 1680 | 0.7803 | 0.5918 | 0.8401 | 0.6739 | -1.0 | 0.6233 | 0.5987 | 0.4201 | 0.7129 | 0.7684 | -1.0 | 0.7143 | 0.7759 | 0.4768 | 0.74 | 0.6328 | 0.8024 | 0.6658 | 0.7629 |
| 0.5383 | 29.0 | 1740 | 0.7800 | 0.5886 | 0.8382 | 0.6727 | -1.0 | 0.6116 | 0.5971 | 0.4201 | 0.7111 | 0.7683 | -1.0 | 0.7071 | 0.7767 | 0.476 | 0.7425 | 0.629 | 0.8024 | 0.6608 | 0.76 |
| 0.5383 | 30.0 | 1800 | 0.7771 | 0.5882 | 0.8376 | 0.6723 | -1.0 | 0.6116 | 0.5966 | 0.4201 | 0.7111 | 0.7683 | -1.0 | 0.7071 | 0.7767 | 0.4758 | 0.7425 | 0.6281 | 0.8024 | 0.6608 | 0.76 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
riveRiPH/Violet_Magcap-12B-6bpw-h8-exl2
|
riveRiPH
| 2025-04-25T18:00:21Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"en",
"base_model:Nitral-AI/Violet_Magcap-12B",
"base_model:quantized:Nitral-AI/Violet_Magcap-12B",
"license:other",
"6-bit",
"exl2",
"region:us"
] | null | 2025-04-25T17:04:45Z |
---
base_model:
- Nitral-AI/Violet_Magcap-12B
base_model_relation: quantized
license: other
language:
- en
---
# Violet_Magcap-12B-6bpw-h8-exl2
This is a 6 bpw, h8 exl2 quant of [Violet_Magcap-12B](https://huggingface.co/Nitral-AI/Violet_Magcap-12B).
The built-in (default) calibration dataset was used.
|
hasdal/fe63919a-02cc-4b2e-bed0-07eafb896618
|
hasdal
| 2025-04-25T15:16:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T14:09:38Z |
---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe63919a-02cc-4b2e-bed0-07eafb896618
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ccf83f7ddd07c30b_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ccf83f7ddd07c30b_train_data.json
  type:
    field_instruction: question
    field_output: answer
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: hasdal/fe63919a-02cc-4b2e-bed0-07eafb896618
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00022
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/ccf83f7ddd07c30b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 30
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e4ceb50d-bfa1-48e0-bf29-e8892c1eb849
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e4ceb50d-bfa1-48e0-bf29-e8892c1eb849
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fe63919a-02cc-4b2e-bed0-07eafb896618
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00022
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: fused AdamW (`OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | 1.3118 |
| 0.3989 | 1.0157 | 500 | 0.3972 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-8Bit
|
Fmuaddib
| 2025-04-25T13:09:33Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"base_model:PeterLauLukCh/Qwen2.5-14B-Instruct-o4",
"base_model:quantized:PeterLauLukCh/Qwen2.5-14B-Instruct-o4",
"license:mit",
"8-bit",
"region:us"
] | null | 2025-04-25T13:08:44Z |
---
license: mit
base_model: PeterLauLukCh/Qwen2.5-14B-Instruct-o4
tags:
- mlx
---
# Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-8Bit
The Model [Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-8Bit](https://huggingface.co/Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-8Bit) was converted to MLX format from [PeterLauLukCh/Qwen2.5-14B-Instruct-o4](https://huggingface.co/PeterLauLukCh/Qwen2.5-14B-Instruct-o4) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("Fmuaddib/Qwen2.5-14B-Instruct-o4-mlx-8Bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
IvanHU/YuLan-Mini-Instruct-8bit
|
IvanHU
| 2025-04-25T13:06:36Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"code",
"math",
"text-generation",
"conversational",
"en",
"zh",
"base_model:yulan-team/YuLan-Mini-Instruct",
"base_model:quantized:yulan-team/YuLan-Mini-Instruct",
"license:mit",
"model-index",
"8-bit",
"region:us"
] |
text-generation
| 2025-04-25T07:50:49Z |
---
license: mit
library_name: mlx
pipeline_tag: text-generation
language:
- en
- zh
tags:
- code
- math
- mlx
arxiv: 2412.17743
base_model: yulan-team/YuLan-Mini-Instruct
model-index:
- name: YuLan-Mini-Instruct
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@10
      value: 0.866
      name: pass@10
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: mbpp
    metrics:
    - type: pass@10
      value: 0.857
      name: pass@10
      verified: false
  - task:
      type: text-generation
    dataset:
      name: MATH
      type: math
    metrics:
    - type: maj@1
      value: 0.552
      name: maj@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: GSM8K
      type: gsm8k
    metrics:
    - type: maj@1
      value: 0.717
      name: maj@1
      verified: false
---
# IvanHU/YuLan-Mini-Instruct-8bit
This model [IvanHU/YuLan-Mini-Instruct-8bit](https://huggingface.co/IvanHU/YuLan-Mini-Instruct-8bit) was
converted to MLX format from [yulan-team/YuLan-Mini-Instruct](https://huggingface.co/yulan-team/YuLan-Mini-Instruct)
using mlx-lm version **0.22.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("IvanHU/YuLan-Mini-Instruct-8bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
PhoenixB/b7ee4dc8-f35c-4be4-9cb5-381ff2d64c3c
|
PhoenixB
| 2025-04-25T10:23:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:dltjdgh0928/test_instruction",
"base_model:adapter:dltjdgh0928/test_instruction",
"license:apache-2.0",
"region:us"
] | null | 2025-04-24T23:18:03Z |
---
library_name: peft
license: apache-2.0
base_model: dltjdgh0928/test_instruction
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b7ee4dc8-f35c-4be4-9cb5-381ff2d64c3c
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: dltjdgh0928/test_instruction
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 7bfc1f73c89d9947_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/7bfc1f73c89d9947_train_data.json
  type:
    field_instruction: instruction
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: /workspace/axolotl/configs/deepspeed_stage2.json
eval_max_new_tokens: 128
eval_sample_packing: false
eval_steps: 10
eval_table_size: null
flash_attention: true
fp16: false
gpu_memory_limit: 80GiB
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: PhoenixB/b7ee4dc8-f35c-4be4-9cb5-381ff2d64c3c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2e-4
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/7bfc1f73c89d9947_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 32768
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd16c759-8369-461a-8cdb-f22aa44f5a17
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd16c759-8369-461a-8cdb-f22aa44f5a17
warmup_steps: 10
weight_decay: 0.0
```
</details><br>
# b7ee4dc8-f35c-4be4-9cb5-381ff2d64c3c
This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the dataset specified in the axolotl config above (`7bfc1f73c89d9947_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 0.1392
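Since this repository ships a LoRA adapter rather than merged weights, a minimal loading sketch with PEFT (assuming the base model above is accessible) looks like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained against, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("dltjdgh0928/test_instruction")
model = PeftModel.from_pretrained(base, "PhoenixB/b7ee4dc8-f35c-4be4-9cb5-381ff2d64c3c")
tokenizer = AutoTokenizer.from_pretrained("dltjdgh0928/test_instruction")
```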
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.8925 |
| 0.4113 | 0.0013 | 10 | 0.2989 |
| 0.1745 | 0.0026 | 20 | 0.1929 |
| 0.2395 | 0.0039 | 30 | 0.1848 |
| 0.1694 | 0.0051 | 40 | 0.1609 |
| 0.122 | 0.0064 | 50 | 0.1606 |
| 0.1695 | 0.0077 | 60 | 0.1522 |
| 0.131 | 0.0090 | 70 | 0.1440 |
| 0.1814 | 0.0103 | 80 | 0.1424 |
| 0.1302 | 0.0116 | 90 | 0.1394 |
| 0.1024 | 0.0129 | 100 | 0.1392 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/bert-tiny-book-text-classifier-i1-GGUF
|
mradermacher
| 2025-04-25T10:15:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:shhossain/book-text-classifier",
"base_model:shhossain/bert-tiny-book-text-classifier",
"base_model:quantized:shhossain/bert-tiny-book-text-classifier",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"feature-extraction"
] | null | 2025-04-25T10:14:10Z |
---
base_model: shhossain/bert-tiny-book-text-classifier
datasets:
- shhossain/book-text-classifier
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/shhossain/bert-tiny-book-text-classifier
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
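As a concrete illustration, multi-part GGUF files are reassembled by simple byte-wise concatenation, in part order, before loading; the part names below are hypothetical:
```bash
# Hypothetical split names; real uploads may use a different naming scheme.
cat model.i1-Q4_K_M.gguf.part1of2 model.i1-Q4_K_M.gguf.part2of2 > model.i1-Q4_K_M.gguf
```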
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/bert-tiny-book-text-classifier-i1-GGUF/resolve/main/bert-tiny-book-text-classifier.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jaymekoszut/sdcvsdc
|
jaymekoszut
| 2025-04-25T09:47:26Z | 0 | 0 | null |
[
"license:bsd-2-clause",
"region:us"
] | null | 2025-04-25T09:47:26Z |
---
license: bsd-2-clause
---
|
Szahriwar/Llama-3.2-3B-Instruct-bnb-4bit-elife-lora
|
Szahriwar
| 2025-04-25T09:25:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T09:25:31Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Szahriwar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AI-Enthusiast11/mistral-7b-4bit-pii-entity-extractor
|
AI-Enthusiast11
| 2025-04-25T09:11:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-24T21:52:46Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AI-Enthusiast11
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
efficientscaling/Z1-Longest-7B
|
efficientscaling
| 2025-04-25T09:11:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T09:10:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
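In the absence of author-provided instructions, a minimal sketch for this Qwen2-architecture causal LM, assuming standard `transformers` loading applies, is:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard causal-LM loading; assumes no custom code is required.
tokenizer = AutoTokenizer.from_pretrained("efficientscaling/Z1-Longest-7B")
model = AutoModelForCausalLM.from_pretrained("efficientscaling/Z1-Longest-7B")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```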
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
marzieh-maleki/defeasible-snli-t5-small-strengthener-tuned
|
marzieh-maleki
| 2025-04-25T09:04:31Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-25T09:04:16Z |
---
base_model: google-t5/t5-small
library_name: transformers
model_name: defeasible-snli-t5-small-strengthener-tuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for defeasible-snli-t5-small-strengthener-tuned
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# T5 is a sequence-to-sequence model, so use the text2text-generation pipeline
# and pass a plain string (t5-small has no chat template).
generator = pipeline("text2text-generation", model="marzieh-maleki/defeasible-snli-t5-small-strengthener-tuned", device="cuda")
output = generator(question, max_new_tokens=128)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/marzieh-maleki-ghent-university/def_nli_baselines_sep/runs/eqqsqqc3)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.2
- Pytorch: 2.6.0
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DiricziZsolt/ovis2-1b-5850-finetuned-damage
|
DiricziZsolt
| 2025-04-25T08:35:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:AIDC-AI/Ovis2-1B",
"base_model:adapter:AIDC-AI/Ovis2-1B",
"region:us"
] | null | 2025-04-25T08:35:04Z |
---
base_model: AIDC-AI/Ovis2-1B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
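As a placeholder until the authors add instructions, here is a minimal sketch for loading this PEFT adapter on top of its base model; the Ovis2 repositories ship custom modeling code, so `trust_remote_code=True` is assumed to be required:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Assumption: the adapter attaches to the base model in the standard PEFT way.
base = AutoModelForCausalLM.from_pretrained("AIDC-AI/Ovis2-1B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "DiricziZsolt/ovis2-1b-5850-finetuned-damage")
```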
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
robinfaro/StandardMoE-1B-fineweb_edu-10BT
|
robinfaro
| 2025-04-25T08:22:00Z | 0 | 0 | null |
[
"safetensors",
"moegpt",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"custom_code",
"region:us"
] | null | 2025-04-25T08:19:36Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
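For reference, the mixin pattern this repository relies on works as sketched below; `TinyModel` is a stand-in for the repository's custom `moegpt` class, whose actual definition is not linked yet:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Any nn.Module that also inherits the mixin gains save_pretrained /
# from_pretrained / push_to_hub; recent huggingface_hub versions record
# the __init__ kwargs in config.json automatically.
class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.linear = nn.Linear(hidden, hidden)

model = TinyModel(hidden=8)
model.save_pretrained("tiny-model-dir")
reloaded = TinyModel.from_pretrained("tiny-model-dir")
```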
|
dgambettaphd/M_llm3_gen7_run0_X_doc1000_synt64_tot128_FRESH
|
dgambettaphd
| 2025-04-25T08:16:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T08:15:58Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
firoz123/codegemma-2b-IQ3_M-GGUF
|
firoz123
| 2025-04-25T06:12:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:google/codegemma-2b",
"base_model:quantized:google/codegemma-2b",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-04-25T06:12:33Z |
---
base_model: google/codegemma-2b
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# firoz123/codegemma-2b-IQ3_M-GGUF
This model was converted to GGUF format from [`google/codegemma-2b`](https://huggingface.co/google/codegemma-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/codegemma-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo firoz123/codegemma-2b-IQ3_M-GGUF --hf-file codegemma-2b-iq3_m-imat.gguf -c 2048
```
|
Flo0620/Qwen2_5_7B_r8_a8_d0_2
|
Flo0620
| 2025-04-25T05:57:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T02:40:09Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r8_a8_d0_2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r8_a8_d0_2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r8_a8_d0_2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dgambettaphd/M_llm3_gen5_run0_X_doc1000_synt64_tot128_FRESH
|
dgambettaphd
| 2025-04-25T05:47:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T05:47:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hcwei/FRANK-ZERO-38B
|
hcwei
| 2025-04-25T05:44:56Z | 7 | 2 | null |
[
"safetensors",
"internvl_chat",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-03-12T15:45:27Z |
---
license: apache-2.0
---
# <span style="font-weight: bold; color: red;">FRANK-ZERO</span>: a training-FRee And Native r1-liKe multimodal LLM
[\[📂 GitHub\]](https://github.com/hcwei13/FRANK-ZERO-Inference)

## Introduction
We have successfully migrated r1-like reasoning abilities into the joint vision-language space. Based on a unimodal language reasoning model (QwQ-32B) and a multimodal non-reasoning model (InternVL-38B), we constructed a training-FRee And Native r1-liKe multimodal reasoning model, <span style="font-weight: bold; color: red;">FRANK-ZERO</span> 38B. It achieves an accuracy of 74.1% on the MathVista test-mini benchmark, surpassing the closed-source OpenAI o1 model (74.1% vs. 73.9%), and demonstrates excellent long chain-of-thought reasoning and reflection capabilities.
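Inference code is published in the linked GitHub repository. As a rough sketch only (assuming the checkpoint follows the InternVL-style `trust_remote_code` convention its tags suggest), loading might look like:
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: InternVL-style custom modeling code ships with the checkpoint;
# see the GitHub repository above for the authors' actual inference pipeline.
model = AutoModel.from_pretrained("hcwei/FRANK-ZERO-38B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("hcwei/FRANK-ZERO-38B", trust_remote_code=True)
```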
## Example

**Describe this image in detail.**
<span style="font-weight: bold; color: red;">FRANK-ZERO</span> output:
```html
<think>
Okay, let me look at this chart. The title says it's about average precipitation in Christchurch, New Zealand. The x-axis has the months from January to December, and the y-axis is precipitation in millimeters. There are different colored bars for each month, each representing a different value.
First, I need to figure out what each color corresponds to. The key might be missing, but usually in these charts, each color represents a different category. Wait, actually, looking at the chart, each month has a single bar with a different color. Maybe each color is just a different month's data? Hmm, maybe not. Let me check again.
Wait, the title mentions "Average precipitation in Christchurch, New Zealand," so it's likely that each bar represents the average precipitation for each month. The colors might just be for visual distinction between the months. Let me see the values.
Starting from January: the bar is purple, and the value is around 53 mm. February is green, around 47 mm. March is red, about 53 mm. April is blue, around 51 mm. May is orange, the highest at 69 mm. June is light blue, around 67 mm. July is light green, also around 67 mm. August is teal, around 54 mm. September is pink, around 46 mm. October is dark green, around 46 mm. November is red, around 48 mm. December is blue, around 57 mm.
Wait, but the colors might actually represent different categories. Let me think again. The title says "Average precipitation," but maybe each color represents a different type of precipitation? Or perhaps different years? But the x-axis is months, so probably each bar is a month's precipitation, with colors just for differentiation.
Alternatively, maybe the chart is comparing different types of precipitation across months? But the title doesn't mention that. It just says average precipitation. Hmm, maybe the colors are just random. Let me check the values again.
Looking at the numbers:
January: ~53 mm (purple)
February: ~47 mm (green)
March: ~53 mm (red)
April: ~51 mm (blue)
May: ~69 mm (orange)
June: ~67 mm (light blue)
July: ~67 mm (light green)
August: ~54 mm (teal)
September: ~46 mm (pink)
October: ~46 mm (dark green)
November: ~48 mm (red)
December: ~57 mm (blue)
So the highest precipitation is in May at 69 mm, followed by June and July at 67 mm. The lowest is in September and October at 46 mm. The chart shows that precipitation peaks in late spring and early summer, then decreases towards autumn and winter.
The question is probably asking to describe the trend. The chart shows that precipitation is highest in May, June, and July, then decreases in August, and continues to drop in September and October. November and December see a slight increase again but not as high as the summer months.
So the main point is that the highest precipitation occurs in the late spring and early summer months (May to July), with the lowest in autumn (September and October). The rest of the months have moderate precipitation levels.
I need to make sure I didn't misinterpret the colors. Since each month has a different color, it's just for visual separation. The key might have been omitted, but the data is clear from the bars' heights.
Another thing to note is that the chart uses different colors for each month, which might be unnecessary but just for aesthetics. The important part is the height of each bar indicating precipitation in mm.
So the answer should summarize that precipitation peaks in May, June, and July, then decreases, with the lowest in September and October. The rest of the months have varying levels but not as extreme.
</think>
The bar chart displays the average precipitation in Christchurch, New Zealand, across 12 months. Each month is represented by a differently colored bar, with the height indicating precipitation in millimeters (mm).
- **Key Observations:**
- **Highest Precipitation:** May (69 mm), June (67 mm), and July (67 mm) show the highest precipitation, indicating a peak during late spring and early summer.
- **Lowest Precipitation:** September (46 mm) and October (46 mm) have the lowest precipitation, marking a significant drop in autumn.
- **Moderate Levels:** Other months (January to April, August, November, and December) have precipitation ranging between 46–57 mm, with some fluctuations but no extreme highs or lows.
- **Trend:** The chart reveals a clear seasonal pattern where precipitation peaks in the late spring/early summer months (May–July), decreases sharply in autumn (September–October), and stabilizes at moderate levels during the rest of the year. This suggests Christchurch experiences its wettest period from May to July and driest in September and October.
Overall, the data highlights seasonal variations in precipitation, with the highest rainfall occurring in the early part of the year and the lowest in late autumn.
```
## License
This project is released under the licenses of its component models: QwQ-32B (Apache-2.0) and InternVL2.5 (MIT). Portions of this project contain code and models from other sources, which are subject to their respective licenses.
## Acknowledgement
This code base is mainly built upon [InternVL2.5](https://github.com/OpenGVLab/InternVL); thanks to its authors for their awesome work!
We would also like to recognize the following open-source projects (e.g., [Qwen2.5VL](https://github.com/QwenLM/Qwen2.5-VL), [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)); thank you for your great contributions to the open-source community.
|
MayBashendy/arabic_SDP_all_binary_multilingual_e5_small_lr3e-05_targ4_dev1234678
|
MayBashendy
| 2025-04-25T04:28:26Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-25T03:29:27Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
hackelle/mobilenetv4_hybrid_medium-s2-v0.2.0
|
hackelle
| 2025-04-25T04:10:17Z | 0 | 0 |
configilm
|
[
"configilm",
"safetensors",
"mobilenetv4_hybrid_medium",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
] |
image-classification
| 2025-04-25T04:10:09Z |
---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- mobilenetv4_hybrid_medium
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Mobilenetv4_hybrid_medium pretrained on BigEarthNet v2.0 using Sentinel-2 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement in macro average precision on the validation set)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 1000 warmup steps (sketched below)
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 29 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
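The learning-rate schedule above, linear warmup into cosine annealing, can be sketched with a plain PyTorch `LambdaLR`; the warmup length matches the card, while the total step count here is an assumed placeholder:
```python
import math
import torch

def warmup_cosine(step: int, warmup_steps: int = 1000, total_steps: int = 20000):
    # Linear warmup from 0 to 1, then cosine decay back to 0.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

params = [torch.nn.Parameter(torch.zeros(1))]          # stand-in parameters
optimizer = torch.optim.AdamW(params, lr=1e-3)         # lr and optimizer per the card
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_cosine)
```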
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.691085 | 0.854498 |
| F1 Score | 0.622194 | 0.754662 |
| Precision | 0.740795 | 0.796592 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the code that defines the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/mobilenetv4_hybrid_medium-s2-v0.1.1")
```
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
gartland/openwebtext-24K-tokenizer
|
gartland
| 2025-04-25T04:01:21Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T04:01:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
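Given that this repository appears to contain a tokenizer (per its name), a minimal sketch, assuming a standard fast tokenizer is stored in the repo, is:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gartland/openwebtext-24K-tokenizer")
print(tokenizer("Hello, world!").input_ids)
```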
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Tirly/145333
|
Tirly
| 2025-04-25T03:17:01Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-25T03:15:16Z |
---
license: other
license_name: egre
license_link: LICENSE
---
|
Ans0nWr0ng/llama3.1-8b-cantonese_gguf_v3
|
Ans0nWr0ng
| 2025-04-25T03:11:23Z | 94 | 1 | null |
[
"gguf",
"text-generation",
"dataset:stvlynn/Cantonese-Dialogue",
"dataset:hon9kon9ize/yue-alpaca",
"dataset:cantonesesra/Cantonese_AllAspectQA_11K",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-09T01:57:45Z |
---
license: llama3.1
datasets:
- stvlynn/Cantonese-Dialogue
- hon9kon9ize/yue-alpaca
- cantonesesra/Cantonese_AllAspectQA_11K
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
|
PhoenixB/c025fa62-b558-4548-a97e-6325a75fade3
|
PhoenixB
| 2025-04-25T00:53:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-04-25T00:49:23Z |
---
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
model_name: c025fa62-b558-4548-a97e-6325a75fade3
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for c025fa62-b558-4548-a97e-6325a75fade3
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="PhoenixB/c025fa62-b558-4548-a97e-6325a75fade3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients-On-Demand/runs/jbrtp75x)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
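For reference, a sketch of the DPO objective from the cited paper (notation assumed here: trainable policy $\pi_\theta$, frozen reference $\pi_{\mathrm{ref}}$, chosen/rejected completions $y_w$/$y_l$, and scaling coefficient $\beta$):

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$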
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LuckyLukke/grpo_onesided_5-320
|
LuckyLukke
| 2025-04-24T23:53:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T23:50:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MikuMasterRace/Omutopia_Pastel_Puffies_diaper_-_ABDL_-_IllustriousXL_v1
|
MikuMasterRace
| 2025-04-24T20:24:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"dataset:MikuMasterRace/Omutopia_Pastel_Puffies_diaper_-_ABDL_v1",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:adapter:OnomaAIResearch/Illustrious-xl-early-release-v0",
"region:us"
] |
text-to-image
| 2025-04-24T20:16:29Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ComfyUI_(hiresfix)_2025-02-16_00000_14.png
- text: '-'
output:
url: images/ComfyUI_(hiresfix)_2025-02-16_00000_6.png
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
instance_prompt: null
datasets:
- MikuMasterRace/Omutopia_Pastel_Puffies_diaper_-_ABDL_v1
---
# Omutopia Pastel Puffies diaper / おむつ (ABDL) v1 [IllustriousXL 0.1]
<Gallery />
## Reference
[Omutopia Pastel Puffies](https://omutopia.com/en/products/%E6%95%B0%E9%87%8F%E9%99%90%E5%AE%9A%E6%97%A9%E5%89%B2%E3%83%97%E3%83%A9%E3%83%B3%E3%81%82%E3%82%8A-pastel-puffies-%E5%A4%A7%E4%BA%BA%E7%94%A8%E3%81%8A%E3%82%80%E3%81%A4-%E3%83%86%E3%83%BC%E3%83%97%E5%BC%8F-omutopia%E3%83%97%E3%83%AD%E3%82%B8%E3%82%A7%E3%82%AF%E3%83%88) diaper

## Prompting
Main triggerwords:
```
omutopia pastelpuffies, diaper
```
Sub tags:
```
front-print diaper, back-print diaper
```
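A minimal usage sketch with diffusers, assuming an SDXL-compatible pipeline for the Illustrious base checkpoint; if the repo's weight file uses a non-standard name, pass `weight_name` explicitly to `load_lora_weights`:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Illustrious-XL is SDXL-based; load the base checkpoint first.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-xl-early-release-v0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA weights from this repo.
pipe.load_lora_weights(
    "MikuMasterRace/Omutopia_Pastel_Puffies_diaper_-_ABDL_-_IllustriousXL_v1"
)

# Use the trigger words from the Prompting section above.
image = pipe("omutopia pastelpuffies, diaper, front-print diaper").images[0]
image.save("sample.png")
```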
## Download model
Weights for this model are available in Safetensors format.
[Download](/MikuMasterRace/Omutopia_Pastel_Puffies_diaper_-_ABDL_-_IllustriousXL_v1/tree/main) them in the Files & versions tab.
|
egerber1/classifier-de1
|
egerber1
| 2025-04-24T17:12:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-german-cased",
"base_model:finetune:distilbert/distilbert-base-german-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-24T17:12:31Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-german-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: classifier-de1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-de1
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3485
- Accuracy: 0.8738
- Precision: 0.4859
- Recall: 0.3069
- F1: 0.3762
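A minimal inference sketch with the transformers pipeline (the returned label names are whatever this checkpoint's config defines):

```python
from transformers import pipeline

# Load the fine-tuned German DistilBERT classifier from the Hub.
classifier = pipeline("text-classification", model="egerber1/classifier-de1")

# Classify a German sentence; label ids/names depend on the checkpoint config.
print(classifier("Das ist ein Beispielsatz."))
```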
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3406 | 0.0513 | 500 | 0.3753 | 0.8760 | 0.0 | 0.0 | 0.0 |
| 0.3251 | 0.1025 | 1000 | 0.3678 | 0.8760 | 0.0 | 0.0 | 0.0 |
| 0.2989 | 0.1538 | 1500 | 0.3666 | 0.8756 | 0.2806 | 0.0021 | 0.0042 |
| 0.2989 | 0.2050 | 2000 | 0.3648 | 0.8734 | 0.4034 | 0.0430 | 0.0776 |
| 0.2922 | 0.2563 | 2500 | 0.3626 | 0.8746 | 0.4528 | 0.0545 | 0.0973 |
| 0.2757 | 0.3075 | 3000 | 0.3647 | 0.8690 | 0.3960 | 0.1072 | 0.1687 |
| 0.29 | 0.3588 | 3500 | 0.3584 | 0.8706 | 0.4192 | 0.1139 | 0.1791 |
| 0.2587 | 0.4100 | 4000 | 0.3690 | 0.8707 | 0.4287 | 0.1275 | 0.1965 |
| 0.2654 | 0.4613 | 4500 | 0.3626 | 0.8705 | 0.4310 | 0.1387 | 0.2098 |
| 0.2658 | 0.5125 | 5000 | 0.3585 | 0.8758 | 0.4958 | 0.1114 | 0.1820 |
| 0.2523 | 0.5638 | 5500 | 0.3527 | 0.8725 | 0.4556 | 0.1445 | 0.2194 |
| 0.2621 | 0.6150 | 6000 | 0.3522 | 0.8750 | 0.4855 | 0.1308 | 0.2061 |
| 0.2501 | 0.6663 | 6500 | 0.3556 | 0.8594 | 0.3934 | 0.2469 | 0.3034 |
| 0.2318 | 0.7175 | 7000 | 0.3536 | 0.8771 | 0.5181 | 0.1297 | 0.2075 |
| 0.2362 | 0.7688 | 7500 | 0.3424 | 0.8776 | 0.5279 | 0.1201 | 0.1956 |
| 0.2351 | 0.8200 | 8000 | 0.3354 | 0.8731 | 0.4723 | 0.2014 | 0.2823 |
| 0.2153 | 0.8713 | 8500 | 0.3426 | 0.8775 | 0.5198 | 0.1573 | 0.2416 |
| 0.215 | 0.9225 | 9000 | 0.3384 | 0.8785 | 0.5416 | 0.1323 | 0.2127 |
| 0.2177 | 0.9738 | 9500 | 0.3353 | 0.8749 | 0.4891 | 0.2040 | 0.2879 |
| 0.2173 | 1.0250 | 10000 | 0.3303 | 0.8729 | 0.4737 | 0.2243 | 0.3044 |
| 0.2128 | 1.0763 | 10500 | 0.3363 | 0.8770 | 0.5125 | 0.1677 | 0.2527 |
| 0.2093 | 1.1275 | 11000 | 0.3354 | 0.8720 | 0.4693 | 0.2471 | 0.3238 |
| 0.2022 | 1.1788 | 11500 | 0.3349 | 0.8752 | 0.4929 | 0.2122 | 0.2967 |
| 0.1978 | 1.2300 | 12000 | 0.3382 | 0.8722 | 0.4700 | 0.2421 | 0.3196 |
| 0.1974 | 1.2813 | 12500 | 0.3265 | 0.8753 | 0.4930 | 0.1923 | 0.2767 |
| 0.2185 | 1.3325 | 13000 | 0.3458 | 0.8755 | 0.4951 | 0.2055 | 0.2904 |
| 0.1973 | 1.3838 | 13500 | 0.3472 | 0.8738 | 0.4824 | 0.2482 | 0.3278 |
| 0.1946 | 1.4350 | 14000 | 0.3367 | 0.8779 | 0.5203 | 0.1915 | 0.2799 |
| 0.1986 | 1.4863 | 14500 | 0.3394 | 0.8717 | 0.4704 | 0.2750 | 0.3471 |
| 0.1922 | 1.5375 | 15000 | 0.3310 | 0.8770 | 0.5090 | 0.2321 | 0.3188 |
| 0.1765 | 1.5888 | 15500 | 0.3584 | 0.8797 | 0.5454 | 0.1779 | 0.2682 |
| 0.2039 | 1.6400 | 16000 | 0.3279 | 0.8774 | 0.5128 | 0.2290 | 0.3166 |
| 0.2051 | 1.6913 | 16500 | 0.3302 | 0.8794 | 0.5376 | 0.1970 | 0.2883 |
| 0.1868 | 1.7425 | 17000 | 0.3222 | 0.8763 | 0.5021 | 0.2498 | 0.3336 |
| 0.1972 | 1.7938 | 17500 | 0.3296 | 0.8685 | 0.4564 | 0.3163 | 0.3737 |
| 0.1932 | 1.8450 | 18000 | 0.3185 | 0.8776 | 0.5136 | 0.2399 | 0.3270 |
| 0.1797 | 1.8963 | 18500 | 0.3231 | 0.8768 | 0.5064 | 0.2446 | 0.3298 |
| 0.1835 | 1.9475 | 19000 | 0.3230 | 0.8748 | 0.4913 | 0.2729 | 0.3509 |
| 0.1767 | 1.9988 | 19500 | 0.3286 | 0.8756 | 0.4970 | 0.2566 | 0.3385 |
| 0.192 | 2.0500 | 20000 | 0.3304 | 0.8781 | 0.5183 | 0.2405 | 0.3285 |
| 0.1795 | 2.1013 | 20500 | 0.3333 | 0.8793 | 0.5326 | 0.2145 | 0.3059 |
| 0.1716 | 2.1525 | 21000 | 0.3499 | 0.8760 | 0.4998 | 0.2685 | 0.3493 |
| 0.177 | 2.2038 | 21500 | 0.3329 | 0.8775 | 0.5127 | 0.2395 | 0.3265 |
| 0.1541 | 2.2550 | 22000 | 0.3323 | 0.8781 | 0.5182 | 0.2444 | 0.3321 |
| 0.1725 | 2.3063 | 22500 | 0.3384 | 0.8799 | 0.5423 | 0.2033 | 0.2958 |
| 0.182 | 2.3575 | 23000 | 0.3326 | 0.8777 | 0.5138 | 0.2551 | 0.3409 |
| 0.1575 | 2.4088 | 23500 | 0.3373 | 0.8781 | 0.5188 | 0.2381 | 0.3264 |
| 0.1735 | 2.4600 | 24000 | 0.3436 | 0.8795 | 0.5331 | 0.2280 | 0.3194 |
| 0.1545 | 2.5113 | 24500 | 0.3400 | 0.8804 | 0.5447 | 0.2180 | 0.3114 |
| 0.1592 | 2.5625 | 25000 | 0.3422 | 0.8790 | 0.5272 | 0.2348 | 0.3249 |
| 0.1395 | 2.6138 | 25500 | 0.3583 | 0.8796 | 0.5358 | 0.2177 | 0.3096 |
| 0.1543 | 2.6650 | 26000 | 0.3341 | 0.8791 | 0.5296 | 0.2257 | 0.3165 |
| 0.1811 | 2.7163 | 26500 | 0.3245 | 0.8764 | 0.5032 | 0.2790 | 0.3589 |
| 0.1564 | 2.7675 | 27000 | 0.3395 | 0.8789 | 0.5246 | 0.2485 | 0.3373 |
| 0.1585 | 2.8188 | 27500 | 0.3465 | 0.8787 | 0.5221 | 0.2571 | 0.3445 |
| 0.1642 | 2.8700 | 28000 | 0.3545 | 0.8811 | 0.5508 | 0.2230 | 0.3174 |
| 0.1633 | 2.9213 | 28500 | 0.3339 | 0.8755 | 0.4963 | 0.2942 | 0.3694 |
| 0.1663 | 2.9725 | 29000 | 0.3398 | 0.8781 | 0.5166 | 0.2682 | 0.3531 |
| 0.136 | 3.0238 | 29500 | 0.3607 | 0.8807 | 0.5466 | 0.2240 | 0.3178 |
| 0.1409 | 3.0750 | 30000 | 0.3660 | 0.8793 | 0.5304 | 0.2336 | 0.3244 |
| 0.1474 | 3.1263 | 30500 | 0.3519 | 0.8763 | 0.5026 | 0.2635 | 0.3457 |
| 0.1505 | 3.1775 | 31000 | 0.3485 | 0.8738 | 0.4859 | 0.3069 | 0.3762 |
| 0.133 | 3.2288 | 31500 | 0.3578 | 0.8797 | 0.5357 | 0.2263 | 0.3182 |
| 0.1438 | 3.2800 | 32000 | 0.3455 | 0.8758 | 0.4985 | 0.2839 | 0.3617 |
| 0.1591 | 3.3313 | 32500 | 0.3373 | 0.8749 | 0.4929 | 0.3033 | 0.3755 |
| 0.1738 | 3.3825 | 33000 | 0.3446 | 0.8781 | 0.5169 | 0.2656 | 0.3509 |
| 0.1683 | 3.4338 | 33500 | 0.3380 | 0.8776 | 0.5123 | 0.2721 | 0.3554 |
| 0.1567 | 3.4850 | 34000 | 0.3493 | 0.8799 | 0.5338 | 0.2481 | 0.3387 |
| 0.1388 | 3.5363 | 34500 | 0.3463 | 0.8791 | 0.5255 | 0.2557 | 0.3440 |
| 0.15 | 3.5875 | 35000 | 0.3391 | 0.8811 | 0.5454 | 0.2465 | 0.3396 |
| 0.1478 | 3.6388 | 35500 | 0.3465 | 0.8799 | 0.5327 | 0.2544 | 0.3444 |
| 0.1359 | 3.6900 | 36000 | 0.3705 | 0.8798 | 0.5321 | 0.2515 | 0.3416 |
| 0.1502 | 3.7413 | 36500 | 0.3386 | 0.8790 | 0.5236 | 0.2653 | 0.3522 |
| 0.1387 | 3.7925 | 37000 | 0.3514 | 0.8789 | 0.5227 | 0.2719 | 0.3577 |
| 0.1484 | 3.8438 | 37500 | 0.3391 | 0.8805 | 0.5432 | 0.2283 | 0.3215 |
| 0.154 | 3.8950 | 38000 | 0.3584 | 0.8807 | 0.5456 | 0.2259 | 0.3195 |
| 0.1395 | 3.9463 | 38500 | 0.3403 | 0.8779 | 0.5137 | 0.2804 | 0.3628 |
| 0.1429 | 3.9975 | 39000 | 0.3467 | 0.8783 | 0.5172 | 0.2747 | 0.3588 |
| 0.1278 | 4.0488 | 39500 | 0.3581 | 0.8793 | 0.5272 | 0.2609 | 0.3491 |
| 0.1582 | 4.1000 | 40000 | 0.3483 | 0.8783 | 0.5179 | 0.2719 | 0.3566 |
| 0.1174 | 4.1513 | 40500 | 0.3587 | 0.8794 | 0.5279 | 0.2604 | 0.3487 |
| 0.1363 | 4.2025 | 41000 | 0.3594 | 0.8800 | 0.5347 | 0.2514 | 0.3420 |
| 0.1361 | 4.2538 | 41500 | 0.3664 | 0.8806 | 0.5414 | 0.2426 | 0.3350 |
| 0.1299 | 4.3050 | 42000 | 0.3603 | 0.8792 | 0.5258 | 0.2606 | 0.3485 |
| 0.1443 | 4.3563 | 42500 | 0.3705 | 0.8796 | 0.5296 | 0.2616 | 0.3502 |
| 0.1417 | 4.4075 | 43000 | 0.3611 | 0.8800 | 0.5350 | 0.2455 | 0.3366 |
| 0.1354 | 4.4588 | 43500 | 0.3523 | 0.8792 | 0.5249 | 0.2735 | 0.3596 |
| 0.1474 | 4.5100 | 44000 | 0.3683 | 0.8812 | 0.5481 | 0.2384 | 0.3323 |
| 0.1398 | 4.5613 | 44500 | 0.3537 | 0.8800 | 0.5328 | 0.2599 | 0.3494 |
| 0.1558 | 4.6125 | 45000 | 0.3529 | 0.8804 | 0.5391 | 0.2466 | 0.3384 |
| 0.1479 | 4.6638 | 45500 | 0.3489 | 0.8794 | 0.5270 | 0.2640 | 0.3518 |
| 0.1454 | 4.7150 | 46000 | 0.3618 | 0.8798 | 0.5309 | 0.2620 | 0.3508 |
| 0.1327 | 4.7663 | 46500 | 0.3634 | 0.8807 | 0.5423 | 0.2444 | 0.3369 |
| 0.1427 | 4.8175 | 47000 | 0.3578 | 0.8784 | 0.5175 | 0.2836 | 0.3664 |
| 0.1361 | 4.8688 | 47500 | 0.3531 | 0.8794 | 0.5272 | 0.2693 | 0.3565 |
| 0.1303 | 4.9200 | 48000 | 0.3636 | 0.8789 | 0.5231 | 0.2627 | 0.3498 |
| 0.1373 | 4.9713 | 48500 | 0.3528 | 0.8791 | 0.5252 | 0.2628 | 0.3503 |
| 0.1339 | 5.0226 | 49000 | 0.3662 | 0.8795 | 0.5286 | 0.2631 | 0.3513 |
| 0.1449 | 5.0738 | 49500 | 0.3603 | 0.8773 | 0.5095 | 0.2778 | 0.3596 |
| 0.1295 | 5.1251 | 50000 | 0.3811 | 0.8795 | 0.5284 | 0.2616 | 0.3499 |
| 0.1372 | 5.1763 | 50500 | 0.3637 | 0.8769 | 0.5065 | 0.2885 | 0.3676 |
| 0.1381 | 5.2276 | 51000 | 0.3629 | 0.8784 | 0.5176 | 0.2833 | 0.3662 |
| 0.1334 | 5.2788 | 51500 | 0.3639 | 0.8788 | 0.5219 | 0.2672 | 0.3535 |
| 0.1422 | 5.3301 | 52000 | 0.3694 | 0.8779 | 0.5147 | 0.2729 | 0.3566 |
| 0.1413 | 5.3813 | 52500 | 0.3610 | 0.8773 | 0.5097 | 0.2822 | 0.3633 |
| 0.1487 | 5.4326 | 53000 | 0.3650 | 0.8778 | 0.5136 | 0.2736 | 0.3570 |
| 0.1431 | 5.4838 | 53500 | 0.3704 | 0.8797 | 0.5309 | 0.2567 | 0.3461 |
| 0.142 | 5.5351 | 54000 | 0.3637 | 0.8794 | 0.5278 | 0.2607 | 0.3490 |
| 0.1406 | 5.5863 | 54500 | 0.3670 | 0.8790 | 0.5243 | 0.2641 | 0.3512 |
| 0.1484 | 5.6376 | 55000 | 0.3608 | 0.8775 | 0.5109 | 0.2793 | 0.3612 |
| 0.1433 | 5.6888 | 55500 | 0.3652 | 0.8787 | 0.5211 | 0.2705 | 0.3562 |
| 0.1219 | 5.7401 | 56000 | 0.3655 | 0.8782 | 0.5165 | 0.2759 | 0.3597 |
| 0.1344 | 5.7913 | 56500 | 0.3662 | 0.8790 | 0.5242 | 0.2649 | 0.3519 |
| 0.1598 | 5.8426 | 57000 | 0.3684 | 0.8787 | 0.5208 | 0.2727 | 0.3580 |
| 0.1287 | 5.8938 | 57500 | 0.3659 | 0.8791 | 0.5240 | 0.2692 | 0.3556 |
| 0.1182 | 5.9451 | 58000 | 0.3671 | 0.8793 | 0.5263 | 0.2657 | 0.3531 |
| 0.1242 | 5.9963 | 58500 | 0.3650 | 0.8790 | 0.5234 | 0.2693 | 0.3556 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
sapna-kumari-shah-videos/18-video.sapna.shah.viral.video.original.here
|
sapna-kumari-shah-videos
| 2025-04-24T16:49:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-24T16:49:15Z |
|
withpi/pi_scorer_ce_bert_v3_init_84000
|
withpi
| 2025-04-24T16:48:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-24T16:47:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alfonsusrr/qwen2.5-7b-lora-sft-disc-law
|
alfonsusrr
| 2025-04-24T12:23:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T12:03:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AryanManchanda/the_model
|
AryanManchanda
| 2025-04-24T10:04:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T06:37:18Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AryanManchanda
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
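A minimal loading sketch with llama-cpp-python; the `"*.gguf"` filename glob is an assumption about this repo's contents (it matches whichever quantization file is actually present):

```python
from llama_cpp import Llama

# Download and load a GGUF file from the repo via the Hugging Face Hub.
llm = Llama.from_pretrained(
    repo_id="AryanManchanda/the_model",
    filename="*.gguf",  # assumed pattern; replace with the exact file name if needed
)
print(llm("Hello, how are you?", max_tokens=32)["choices"][0]["text"])
```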
|
922CA/gem-monika-ddlc-2b
|
922CA
| 2025-04-24T09:28:49Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-05T05:15:58Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---
# gem-monika-ddlc-2b (AKA Lilmonix2b-v1):
* [Fine-tune](https://huggingface.co/unsloth/gemma-2-2b-bnb-4bit) for the Monika character from DDLC
* Fine-tuned on a [dataset of ~600+ items](https://huggingface.co/datasets/922-CA/MoCha_v1) (dialogue scraped from the game, Reddit, and Twitter, synthetically augmented by turning each item into a snippet of multi-turn chat dialogue between Player and Monika; this was then manually edited, with more hand-crafted items, including information about the character, added in)
* [GG](https://huggingface.co/922CA/gem-monika-ddlc-2b-gguf)[UF](https://huggingface.co/mradermacher/gem-monika-ddlc-2b-GGUF)
# USAGE
This is meant to be mainly a chat model with limited RP ability.
For best results, replace "Human" and "Assistant" with "Player" and "Monika", like so:
\nPlayer: (prompt)\nMonika:
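A minimal sketch of that turn format with transformers (sampling settings here are illustrative, not the recommended values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "922CA/gem-monika-ddlc-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Use the Player/Monika turn format described above.
prompt = "\nPlayer: How are you today?\nMonika:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```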
# HYPERPARAMS
* Tuned for 1 epoch
* rank: 32
* lora alpha: 32
* lora dropout: 0.5
* lr: 2e-4
* batch size: 2
* warmup ratio: 0.1
* grad steps: 4
# WARNINGS AND DISCLAIMERS
This model is meant to closely reflect the characteristics of Monika. Despite this, there is always the chance that "Monika" will hallucinate and get information about herself wrong or act out of character.
Additionally, being character-focused means that this model may have lost some assistant capability for some specific tasks.
Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk!
|
shekharashishraj/gemma-context-aware-summarization_42325
|
shekharashishraj
| 2025-04-24T05:26:59Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-3-1b-pt",
"base_model:adapter:google/gemma-3-1b-pt",
"license:gemma",
"region:us"
] | null | 2025-04-24T02:50:01Z |
---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-pt
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: gemma-context-aware-summarization_42325
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-context-aware-summarization_42325
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt) on an unspecified dataset.
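Since this repo contains a PEFT adapter rather than full weights, a minimal loading sketch (assuming the adapter targets the listed base model):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-3-1b-pt"
adapter_id = "shekharashishraj/gemma-context-aware-summarization_42325"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the PEFT adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
```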
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
zhan1993/merged_model_hf_phi-3
|
zhan1993
| 2025-04-24T03:14:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T03:10:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ready2Work/llama3.2_3B_news_merged
|
Ready2Work
| 2025-04-24T02:50:12Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-24T02:50:12Z |
---
license: apache-2.0
---
|
baby-dev/35ff8131-4e2e-4383-b0e3-185a7cb85c2f
|
baby-dev
| 2025-04-23T19:53:46Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T19:51:51Z |
---
library_name: transformers
model_name: baby-dev/35ff8131-4e2e-4383-b0e3-185a7cb85c2f
tags:
- generated_from_trainer
licence: license
---
# Model Card for baby-dev/35ff8131-4e2e-4383-b0e3-185a7cb85c2f
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
davidgparedes/gemma-auto-teacher
|
davidgparedes
| 2025-04-23T12:03:57Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-14T17:13:05Z |
---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-auto-teacher
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-auto-teacher
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="davidgparedes/gemma-auto-teacher", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
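A minimal sketch of TRL-based SFT of the kind described here; the dataset and arguments below are placeholders, not the actual training setup, which is undocumented:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the real training data for this model is not documented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",          # base model listed in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-auto-teacher"),
)
trainer.train()
```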
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
softaken/EML_Attachment_Extractor
|
softaken
| 2025-04-23T11:53:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-23T11:50:32Z |
Softaken EML Attachment Extractor is an advanced tool for extracting attachments from EML and EMLX files. It extracts attachments without modifying the source EML file and can pull attachments of any size from large EML files, with no risk of data loss or file corruption. The program is simple to use, so even users without a technical background can operate it. Users can preview EML files before extraction, allowing them to see exactly what will be extracted. It supports all popular EML-based email clients, such as Windows Live Mail, Thunderbird, and Outlook Express, and lets users save the extracted attachments to a location of their choice. It is compatible with all recent Windows versions, including Windows 11, 10, 8.1, 8, 7, XP, and Vista. A free trial version is available, which users can download and test at no cost to understand the software's features and behavior before purchase. The tool also comes with a reliable support system: a dedicated technical team is ready to help with software installation, the activation process, or any confusion during extraction.
Read More: https://www.softaken.com/eml-attachment-extractor
|
Xubi23/trainer_output
|
Xubi23
| 2025-04-22T18:10:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-04-22T17:05:31Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: trainer_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4267
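A minimal inference sketch with the fine-tuned checkpoint (the image path and confidence threshold are illustrative):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

model_id = "Xubi23/trainer_output"
processor = AutoImageProcessor.from_pretrained(model_id)
model = DetrForObjectDetection.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```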
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Chdn1985/chanda
|
Chdn1985
| 2025-04-22T03:11:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-22T03:11:32Z |
---
license: apache-2.0
---
|
leodonkikonki/grokuku
|
leodonkikonki
| 2025-04-21T04:46:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-21T04:46:14Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: grokuku
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# grokuku
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `grokuku` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
dekos2606/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove
|
dekos2606
| 2025-04-19T11:13:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am beaked aquatic dove",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-19T11:11:51Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am beaked aquatic dove
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dekos2606/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_aquatic_dove", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
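For reference, the group-relative advantage that gives GRPO its name, as defined in the cited paper (assuming $G$ completions are sampled per prompt with rewards $r_1, \dots, r_G$):

$$
\hat{A}_i = \frac{r_i - \mathrm{mean}(\{r_1, \dots, r_G\})}{\mathrm{std}(\{r_1, \dots, r_G\})}
$$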
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|