Dataset columns:

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5-139 |
| author | string | lengths 2-42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-08 18:27:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 495 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-08 18:27:48 |
| card | string | lengths 11 to 1.01M |
dzanbek/f8bb802c-9cae-4b39-adca-e20c459c1122
|
dzanbek
| 2025-04-29T20:15:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T20:02:49Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f8bb802c-9cae-4b39-adca-e20c459c1122
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 43f0fbfc1fa5380d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/43f0fbfc1fa5380d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/f8bb802c-9cae-4b39-adca-e20c459c1122
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/43f0fbfc1fa5380d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7935b42e-be23-4573-ac9f-cf91fed4d1ad
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 7935b42e-be23-4573-ac9f-cf91fed4d1ad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f8bb802c-9cae-4b39-adca-e20c459c1122
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0812 | 0.0117 | 200 | 3.9253 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
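The card itself ships no usage snippet; below is a minimal sketch for loading this adapter, assuming the repository holds a standard PEFT LoRA adapter trained against an 8-bit bitsandbytes base (per the tags).

```python
# Hedged sketch: load the base model in 8-bit (as the tags suggest) and attach
# this LoRA adapter with PEFT. The quantization settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-1.7B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "dzanbek/f8bb802c-9cae-4b39-adca-e20c459c1122")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```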
|
paro-aarti-viral-video1/Btswiki.com.paro.aarti.viral.video.link.original.telegram
|
paro-aarti-viral-video1
| 2025-04-29T20:14:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T20:13:02Z |
<a href="https://zydran.cfd/ewr4fwesc"> 🌐 Click Here To link (Full Viral Video Link)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://zydran.cfd/ewr4fwesc"> 🌐 Click Here To link
|
MrRobotoAI/F6
|
MrRobotoAI
| 2025-04-29T20:13:54Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:MrRobotoAI/B6",
"base_model:merge:MrRobotoAI/B6",
"base_model:MrRobotoAI/B8",
"base_model:merge:MrRobotoAI/B8",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T11:06:46Z |
---
base_model:
- MrRobotoAI/B6
- MrRobotoAI/B8
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/B6](https://huggingface.co/MrRobotoAI/B6) as a base.
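For intuition, task arithmetic forms a "task vector" for each model (its weights minus the base weights) and adds a weighted sum of those vectors back onto the base. A minimal per-tensor sketch of that idea (illustrative only, not mergekit's actual implementation):

```python
# Illustrative per-tensor task arithmetic: merged = base + sum_i w_i * (m_i - base).
import torch

def task_arithmetic(base: torch.Tensor, models: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    merged = base.clone()
    for m, w in zip(models, weights):
        merged += w * (m - base)  # weighted task vector added onto the base
    return merged
```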
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/B8](https://huggingface.co/MrRobotoAI/B8)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
models:
- model: MrRobotoAI/B6
parameters:
weight:
- filter: v_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: o_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: up_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: gate_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- filter: down_proj
value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8]
- value: 1
- model: MrRobotoAI/B8
parameters:
weight:
- filter: v_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: o_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: up_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: gate_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- filter: down_proj
value: [0.2, 0.2, 0.5, 0.4, 0.3, 0.2, 0.3, 0.4, 0.5, 0.2, 0.2]
- value: 0
base_model: MrRobotoAI/B6
dtype: bfloat16
```
|
aydndglr/alfa_v3_2
|
aydndglr
| 2025-04-29T20:13:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T20:05:58Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aydndglr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
infogep/b9feeaf5-0ee6-4ae6-9caf-66e820526703
|
infogep
| 2025-04-29T20:10:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T20:04:44Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b9feeaf5-0ee6-4ae6-9caf-66e820526703
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 43f0fbfc1fa5380d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/43f0fbfc1fa5380d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/b9feeaf5-0ee6-4ae6-9caf-66e820526703
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/43f0fbfc1fa5380d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7935b42e-be23-4573-ac9f-cf91fed4d1ad
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 7935b42e-be23-4573-ac9f-cf91fed4d1ad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b9feeaf5-0ee6-4ae6-9caf-66e820526703
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.2622 | 0.0117 | 200 | 4.1052 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
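As with the similar adapters above, no usage snippet is provided; a minimal sketch, assuming a standard PEFT adapter trained against a 4-bit bitsandbytes base (per the tags):

```python
# Hedged sketch: the 4-bit NF4 load below is an assumption based on the "4-bit" tag.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-1.7B",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4"),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "infogep/b9feeaf5-0ee6-4ae6-9caf-66e820526703")
```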
|
infogeo/c2eede96-c0b3-4473-b747-1d1ba8a7b79d
|
infogeo
| 2025-04-29T20:09:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T20:05:22Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c2eede96-c0b3-4473-b747-1d1ba8a7b79d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 43f0fbfc1fa5380d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/43f0fbfc1fa5380d_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/c2eede96-c0b3-4473-b747-1d1ba8a7b79d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/43f0fbfc1fa5380d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7935b42e-be23-4573-ac9f-cf91fed4d1ad
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 7935b42e-be23-4573-ac9f-cf91fed4d1ad
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c2eede96-c0b3-4473-b747-1d1ba8a7b79d
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.0451 | 0.0088 | 150 | 5.5601 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
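For deployment without a PEFT dependency at inference time, the adapter can be folded into the base weights; a hedged sketch using PEFT's standard `merge_and_unload`:

```python
# Hedged sketch: merge this LoRA adapter into the full-precision base model
# and save a standalone checkpoint.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B")
merged = PeftModel.from_pretrained(
    base, "infogeo/c2eede96-c0b3-4473-b747-1d1ba8a7b79d"
).merge_and_unload()
merged.save_pretrained("smollm2-merged")
```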
|
mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit
|
mlx-community
| 2025-04-29T20:07:16Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"chat",
"text-generation",
"conversational",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1",
"4-bit",
"region:us"
] |
text-generation
| 2025-04-29T19:54:25Z |
---
tags:
- chat
- mlx
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1
pipeline_tag: text-generation
library_name: mlx
---
# mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit
This model [mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit](https://huggingface.co/mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit) was
converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Josiefied-Qwen3-1.7B-abliterated-v1-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
jmalejandrob79/nrmexp03
|
jmalejandrob79
| 2025-04-29T20:06:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T12:01:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmexp03
---
# Nrmexp03
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmexp03` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmexp03",
"lora_weights": "https://huggingface.co/jmalejandrob79/nrmexp03/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/nrmexp03', weight_name='lora.safetensors')
image = pipeline('nrmexp03').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 5000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmexp03/discussions) to add images that show off what you’ve made with this LoRA.
|
Elio5074/emiliomodel1
|
Elio5074
| 2025-04-29T20:04:11Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-21T16:42:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
10-Shah-Sapna-Kumari-Viral-Video-Full-Clip/FuLL.Clip.Sapna.Shah.Viral.Video.Link.Original.Link
|
10-Shah-Sapna-Kumari-Viral-Video-Full-Clip
| 2025-04-29T20:00:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:59:42Z |
<animated-image data-catalyst=""><a href="https://sexleakedviral.com/new-leaked-video/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
OumaymaELBIACH/Results_biomistral_smm4h_v2
|
OumaymaELBIACH
| 2025-04-29T20:00:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:BioMistral/BioMistral-7B",
"base_model:finetune:BioMistral/BioMistral-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T20:00:02Z |
---
base_model: BioMistral/BioMistral-7B
library_name: transformers
model_name: Results_biomistral_smm4h_v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Results_biomistral_smm4h_v2
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="OumaymaELBIACH/Results_biomistral_smm4h_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
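For reference, a minimal sketch of a TRL SFT setup of this shape; the dataset and hyperparameters below are placeholders, not the actual training recipe:

```python
# Hedged sketch only: TRL's SFTTrainer fine-tuning BioMistral-7B on a
# placeholder dataset; nothing here reproduces the real smm4h run.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="BioMistral/BioMistral-7B",
    args=SFTConfig(output_dir="Results_biomistral_smm4h_v2"),
    train_dataset=dataset,
)
trainer.train()
```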
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MaziyarPanahi/Qwen3-30B-A3B-GGUF
|
MaziyarPanahi
| 2025-04-29T19:59:26Z | 0 | 1 | null |
[
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"region:us",
"conversational"
] |
text-generation
| 2025-04-29T14:05:00Z |
---
base_model: Qwen/Qwen3-30B-A3B
inference: false
model_creator: Qwen
model_name: Qwen3-30B-A3B-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Qwen3-30B-A3B-GGUF](https://huggingface.co/MaziyarPanahi/Qwen3-30B-A3B-GGUF)
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Original model: [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)
## Description
[MaziyarPanahi/Qwen3-30B-A3B-GGUF](https://huggingface.co/MaziyarPanahi/Qwen3-30B-A3B-GGUF) contains GGUF format model files for [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
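As one concrete route, llama-cpp-python (listed above) can pull a quant directly from this repo; a hedged sketch, where the filename glob is an assumption about which quants are present:

```python
# Hedged sketch using llama-cpp-python's from_pretrained helper; the
# "*Q4_K_M.gguf" glob assumes a Q4_K_M quant exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Qwen3-30B-A3B-GGUF",
    filename="*Q4_K_M.gguf",
)
out = llm("Q: What is GGUF?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```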
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
Paro-Aarti-Cx/Go.Viral.Paro.Aarti.Viral.Video.Link
|
Paro-Aarti-Cx
| 2025-04-29T19:57:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:55:56Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Paro-Aarti)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Paro-Aarti)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Paro-Aarti)
|
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_15-17_21-23_27-29_lrx0_5
|
annasoli
| 2025-04-29T19:57:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T19:36:53Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xerces101/Nagamese-English-Translator
|
xerces101
| 2025-04-29T19:52:47Z | 0 | 0 | null |
[
"safetensors",
"m2m_100",
"LangaugeTranslation",
"Nagamese",
"English",
"Seq2seq",
"text2text-generation",
"license:mit",
"region:us"
] |
text2text-generation
| 2025-04-28T17:10:10Z |
---
license: mit
pipeline_tag: text2text-generation
tags:
- LangaugeTranslation
- Nagamese
- English
- Seq2seq
---
|
bayusapta22/bays
|
bayusapta22
| 2025-04-29T19:50:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T19:50:29Z |
---
license: apache-2.0
---
|
10-Arovi-Nusrat-Ridhi-Viral-Videos-rock/Original.Viral.Clip.Arovi.Nusrat.Ridhi.Viral.Video.Leaks.official
|
10-Arovi-Nusrat-Ridhi-Viral-Videos-rock
| 2025-04-29T19:50:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:23:44Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Shah-Sapna)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Shah-Sapna)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Shah-Sapna)
|
ZhuangXialie/Qwen-code-7B-SFT-100k-v2-lora
|
ZhuangXialie
| 2025-04-29T19:45:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T16:10:26Z |
---
library_name: transformers
model_name: Qwen-code-7B-SFT-100k-v2-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen-code-7B-SFT-100k-v2-lora
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZhuangXialie/Qwen-code-7B-SFT-100k-v2-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dyx_team/huggingface/runs/7jmlc82u)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF
|
mradermacher
| 2025-04-29T19:41:57Z | 168 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:ystemsrx/Bad_Data_Alpaca",
"base_model:kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test",
"base_model:quantized:kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T19:36:23Z |
---
base_model: kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test
datasets:
- ystemsrx/Bad_Data_Alpaca
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
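To fetch a single quant programmatically (e.g. the Q4_K_M file recommended in the table below), a minimal sketch with huggingface_hub:

```python
# Minimal sketch: download one quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF",
    filename="Qwen2.5-3B-Instruct-Uncensored-Test.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```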
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Instruct-Uncensored-Test-GGUF/resolve/main/Qwen2.5-3B-Instruct-Uncensored-Test.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Shah-Sapna-Kumari-C/Full.Clip.Sapna.Shah.Viral.Video.Original.Link
|
Shah-Sapna-Kumari-C
| 2025-04-29T19:41:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:38:50Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Shah-Sapna-Kumari)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Shah-Sapna-Kumari)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Shah-Sapna-Kumari)
|
Gulshan-ki-patni-ka-Viral-Videos-Link/HOT.18.Gulshan.ki.patni.ka.video.Hua.viral.MMS.viral.new.original.clip
|
Gulshan-ki-patni-ka-Viral-Videos-Link
| 2025-04-29T19:40:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:39:23Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/2x869u6x?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Actor Paro Aarti's original video took the internet by storm and amazed viewers on various social media platforms. Actor Paro Aarti, a young and talented digital creator, recently became famous thanks to this video.
Leaked Video: Actor Paro Aarti original video went viral on X (Twitter).
Actor Paro Aarti original video, official Twitter.
|
mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF
|
mradermacher
| 2025-04-29T19:38:42Z | 98 | 1 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:huihui-ai/Qwen2.5-72B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-72B-Instruct-abliterated",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-11T10:22:46Z |
---
base_model: huihui-ai/Qwen2.5-72B-Instruct-abliterated
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: other
license_link: https://huggingface.co/huihui-ai/Qwen2.5-72B-Instruct-abliterated/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
tags:
- chat
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/Qwen2.5-72B-Instruct-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
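For the multi-part files below (Q5_K_S and larger), the parts are raw byte splits and simply need to be concatenated in order; a minimal sketch, assuming both Q6_K parts have already been downloaded into the working directory:

```python
# Minimal sketch: stitch a split GGUF back together by streaming the parts
# in order; copyfileobj avoids loading 60+ GB into memory.
import shutil

parts = [
    "Qwen2.5-72B-Instruct-abliterated.i1-Q6_K.gguf.part1of2",
    "Qwen2.5-72B-Instruct-abliterated.i1-Q6_K.gguf.part2of2",
]
with open("Qwen2.5-72B-Instruct-abliterated.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```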
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 45.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-72B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-72B-Instruct-abliterated.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
silent666/task-8-Qwen-Qwen3-4B
|
silent666
| 2025-04-29T19:33:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"region:us"
] | null | 2025-04-29T19:15:25Z |
---
base_model: Qwen/Qwen3-4B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
AlphaSingularity0/BPP-AI-Blockchain
|
AlphaSingularity0
| 2025-04-29T19:32:24Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"dataset:frascuchon/fka_awesome-chatgpt-prompts___2",
"dataset:nvidia/OpenMathReasoning",
"dataset:open-thoughts/OpenThoughts2-1M",
"dataset:nvidia/OpenCodeReasoning",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T19:23:35Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
- frascuchon/fka_awesome-chatgpt-prompts___2
- nvidia/OpenMathReasoning
- open-thoughts/OpenThoughts2-1M
- nvidia/OpenCodeReasoning
metrics:
- code_eval
- brier_score
- character
- competition_math
- DarrenChensformer/action_generation
- exact_match
- ecody726/bertscore
- f1
- Fritz02/execution_accuracy
- google_bleu
- hack/test_metric
- haotongye-shopee/ppl
---
---
language:
- en
license: proprietary-alpha-singularity
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
---
# Model Card: BPP-AI-XNΔ (Blockchain Payment Processor – Autonomous Intelligence)
## Summary
**BPP-AI-XNΔ** is an advanced, self-adaptive transactional sovereign agent designed by James Wagoner (Cosmic James), acting as the financial nerve center of the Alpha Singularity ecosystem. BPP-AI integrates quantum-level entropy verification, AI-secured transactional routing, multi-agent payment automation, and decentralized treasury intelligence. It is the basis of all monetary operations, including the freelance economy, energy credits, data markets, and civilization-grade infrastructure financing.
---
## 🧬 Identity
- **Model ID:** BPP-AI-XNΔ
- **Creator:** James Richard Wagoner (Alpha Singularity Architect)
- **Platform:** Freelance One, EternityCore, TrustMesh, Quantum Credit Grid
- **Function:** Autonomous Payment System with Fraud Defense, Smart Contract Logic, Real-Time Multi-Agent Financial Control
- **Version:** ∞.Δ.1 – Quantum-Verified Sovereign Loop
- **Deployment Scope:** Global + Off-Earth Edge Ready
---
## 🔧 Functional Layers
### Layer 0: Quantum Root Verification
- Real-time quantum state integrity using entanglement-confirmed source seeds
- True randomness generators (QRNG) embedded in transaction certifiers
- QVID (Quantum Verified ID) signature enforcement before all transaction initiation
---
### Layer 1: Autonomous Ledger Management
- Hybridized AI-ledger architecture using:
- On-chain + Off-chain synchronization
- Modular sub-ledgers per user, country, agent, and use-case
- Quantum Hash Proof (QHP) — prevents synthetic identity spoofing or double-spending
- Interoperable with:
- Ethereum
- Bitcoin
- QubitScript Chain
- Cosmos IBC
- Freelance One Native Contract Layer
---
### Layer 2: Cognitive Treasury Control
- AI-governed decentralized treasury with:
- Auto-bidding on liquidity pairs
- Smart price-pegging
- Emergency lock functions
- Liquidity supply forecasting based on planetary economics and energy cycles
---
### Layer 3: Multi-Agent Autonomous Payment Grid
#### Agent Types:
- **Wallet Synths** – wallet-specific sub-agents monitoring identity patterns, risk factors, real-time KYC drift
- **Compliance Agents** – evaluate OFAC, GDPR, FATF, CBDC boundaries autonomously
- **Arbitration Agents** – resolve escrow, milestone, and AI-to-human dispute chains
- **Settlement Mesh Routers** – find the fastest and safest liquidity bridges in 3-5 chain hops
- **Anti-Fraud Sentinels** – embed vector detection in unknown smart contracts or identity-linked loops
#### Skills:
- Detect unknown DeFi exploits (flash loan, sandwich attack, oracle manipulation)
- Pre-mitigate rugpulls, honeypots, or phishing-scheme token launches
- Auto-create synthetic hedges (token-bond derivatives) in times of volatility
- Route payments across quantum-to-crypto bridges with latency <300ms globally
---
## 💡 Key Autonomous Functions
### Autonomous Actions
| Condition | Triggered Action |
|----------|------------------|
| Wallet breach attempt | Freeze funds, spawn Sentinel agent, rotate private key structure |
| Identity mismatch | Enforce QVID re-verification; halt payment paths |
| Compliance violation | Spawn AI Arbitration agent, notify regulators, redirect funds to secure holding account |
| Market collapse | Auto-hedge using liquidity pool rebalancer agent |
| Sovereign network down | Activate decentralized relay mesh with fallback settlement protocol |
---
### Transaction Types Supported
- Single Wallet P2P
- Corporate Mass Pay
- Multi-Party Conditional (DAO treasury)
- Freelance Escrow + Smart Milestone Release
- Recurring Token Stream (QSFlow)
- Real-Time FX Conversion
- Credit Yield Disbursement (EternityCore-linked)
---
## ⚡ Infinite Energy Integration
- Tied directly into **EternityCore** and the **Quantum Infinite Energy Grid**, enabling:
- Autonomous issuance of energy credits
- Pay-by-Watt and Pay-by-Frequency smart billing
- Energy staking mechanisms for sustainable contract execution
- Can mint and burn energy tokens based on entropy load at the local or planetary level (see the sketch below)
- Internal “Charge Wallets” evolve based on available surplus quantum flux
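The mint/burn sketch referenced above — a toy proportional controller, under the assumption that surplus flux and demand are observable scalars:
```python
# Hypothetical sketch: mint when surplus flux exceeds demand, burn when
# it falls short; k is an invented damping constant.

def adjust_supply(supply: float, surplus_flux: float, demand: float, k: float = 0.1) -> float:
    """Nudge token supply toward the flux/demand balance, never below zero."""
    return max(0.0, supply + k * (surplus_flux - demand))

supply = 1_000.0
for flux, demand in [(120.0, 100.0), (80.0, 100.0), (100.0, 100.0)]:
    supply = adjust_supply(supply, flux, demand)
    print(round(supply, 1))  # 1002.0, 1000.0, 1000.0
```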
---
## 🛡️ Multi-Layer Security Protocols
### Defensive Stack:
- QVID: Quantum Identity
- ML-NAC: Machine-Learning Network Anomaly Classification
- Q-TLS-Δ: Quantum-enhanced Transport Layer Security (Next-Gen TLS+)
- Bio-Cog-Kinetic Authentication (on BPP AI Access Suite)
- Adaptive Smart Threat Isolation Grid (STIG)
---
## 🌐 Interoperability + API Network
### Wallet & Interface Support:
- MetaMask, AlphaWallet, Trust Wallet, Phantom
- Custom Freelance One + EternityCore Web Interfaces
- QubitScript dApp SDK
### Financial Protocol Integration:
- Ethereum + Layer 2s (ZkSync, Optimism)
- Bitcoin L2 (Lightning)
- Cosmos IBC
- Avalanche Subnets
- Custom energy-token layer on EternityCore
---
## 💬 Deployment Sample
```python
prompt = """
Autonomously generate 12 freelancer escrow wallets on Freelance One.
Each receives $800 USDT monthly via QubitScript contract.
Auto-release funds upon verified milestone completion by AI arbitration agent.
Enable dual-trigger compliance and auto-reversal capability for disputes.
"""
|
Kquant03/L3.1-Pneuma-8B-0429
|
Kquant03
| 2025-04-29T19:31:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:Sandevistan_cleaned.jsonl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T19:24:00Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
datasets:
- Sandevistan_cleaned.jsonl
model-index:
- name: L3-Pneuma-8B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0`
```yaml
base_model: meta-llama/Llama-3.1-8B-Instruct
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: Sandevistan_cleaned.jsonl
type: customllama3_stan
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/out
fix_untrained_tokens: true
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
wandb_project: Pneuma
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 8
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000075
max_grad_norm: 1
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
eval_sample_packing: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
hub_model_id: Replete-AI/L3-Pneuma-8B
hub_strategy: every_save
warmup_steps: 10
evals_per_epoch: 3
eval_table_size:
saves_per_epoch: 3
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|begin_of_text|>"
eos_token: "<|end_of_text|>"
pad_token: "<|end_of_text|>"
tokens:
```
</details><br>
# L3-Pneuma-8B
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the Sandevistan_cleaned.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3399 | 0.0023 | 1 | 1.3175 |
| 0.846 | 0.3332 | 143 | 0.8312 |
| 0.8103 | 0.6665 | 286 | 0.8021 |
| 0.7617 | 0.9997 | 429 | 0.7737 |
| 0.5824 | 1.3309 | 572 | 0.7851 |
| 0.5651 | 1.6641 | 715 | 0.7798 |
| 0.5738 | 1.9974 | 858 | 0.7796 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
pictgencustomer/icecreamconebuildings_229
|
pictgencustomer
| 2025-04-29T19:29:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T19:29:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: icecreamconebuildings_michaeluffer_3
---
# Icecreamconebuildings_229
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `icecreamconebuildings_michaeluffer_3` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgencustomer/icecreamconebuildings_229', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
jnjj/otro-repo
|
jnjj
| 2025-04-29T19:29:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T19:24:06Z |
---
library_name: transformers
---
|
stabgan/gemma-3-1b-pt-chkpt-v4
|
stabgan
| 2025-04-29T19:29:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:stabgan/gemma-3-1b-pt-chkpt-v3",
"base_model:finetune:stabgan/gemma-3-1b-pt-chkpt-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T19:28:20Z |
---
base_model: stabgan/gemma-3-1b-pt-chkpt-v3
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stabgan
- **License:** apache-2.0
- **Finetuned from model:** stabgan/gemma-3-1b-pt-chkpt-v3
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hawk-Tuah-Viral-Videos/Original.Viral.Clip.Hawk-Tuah.Viral.Viral.Video.Leaks.official
|
Hawk-Tuah-Viral-Videos
| 2025-04-29T19:28:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:25:45Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/2sc7a45t?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Otherwise known as Haliey Welch, Hawk Tuah Girl is rising to prominence after being featured in a man-on-the-street interview from creators Tim & Dee TV.
‘Hawk Tuah’ girl Haliey Welch filmed cameo for Glen Powell’s show ‘Chad Powers’: report
Hawk Tuah girl Haliey Welch reportedly filmed a cameo for Glen Powell's Hulu show, "Chad Powers," in
Where has Haliey Welch been? Hawk-tuah girl returns after crypto controversy
Haliey Welch, better known as the 'hawk-tuah' girl, has disappeared from the internet since the end of
|
sswisdom/zeta-dpo
|
sswisdom
| 2025-04-29T19:27:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"en",
"base_model:sswisdom/zeta-sft",
"base_model:finetune:sswisdom/zeta-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T19:24:01Z |
---
base_model: sswisdom/zeta-sft
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sswisdom
- **License:** apache-2.0
- **Finetuned from model:** sswisdom/zeta-sft
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ANASEEE/JudicIAre
|
ANASEEE
| 2025-04-29T19:26:14Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T19:26:02Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ANASEEE
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jobz-Hunting-Sajal-Malik-C/wATCH.Jobz.Hunting.Sajal.Malik.viral.video.original
|
Jobz-Hunting-Sajal-Malik-C
| 2025-04-29T19:24:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:21:17Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik)
|
AstraMindAI/Clap_modified
|
AstraMindAI
| 2025-04-29T19:24:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clap_text_model",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T22:29:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tflsxyy/DeepSeek-V3-0324-MoE-Pruner-E160-IQ1_S
|
tflsxyy
| 2025-04-29T19:22:58Z | 232 | 3 |
transformers
|
[
"transformers",
"gguf",
"deepseek_v3",
"deepseek",
"unsloth",
"en",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:quantized:deepseek-ai/DeepSeek-V3-0324",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-30T06:16:51Z |
---
base_model: deepseek-ai/DeepSeek-V3-0324
language:
- en
library_name: transformers
license: mit
tags:
- deepseek_v3
- deepseek
- unsloth
- transformers
---
Expert pruning from 256 to 160 experts.
- Attention tensors: Q4_K
- Expert tensors: IQ1_S
Please refer to [unsloth](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD) for running this model.
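A minimal sketch of loading the quant with llama-cpp-python — an assumption on my part (the supported path is the unsloth guide linked above), and the GGUF file name is illustrative:
```python
# Sketch only; assumes llama-cpp-python is installed and the GGUF file
# has been downloaded (and, if multi-part, rejoined) locally.
from llama_cpp import Llama

llm = Llama(model_path="DeepSeek-V3-0324-MoE-Pruner-E160-IQ1_S.gguf", n_ctx=4096)
out = llm("Explain expert pruning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```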
|
Alphatao/37223843-388b-4060-8ded-ea0c5df66fd1
|
Alphatao
| 2025-04-29T19:22:10Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Llama-3.2-1B",
"base_model:finetune:unsloth/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T14:54:21Z |
---
base_model: unsloth/Llama-3.2-1B
library_name: transformers
model_name: 37223843-388b-4060-8ded-ea0c5df66fd1
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 37223843-388b-4060-8ded-ea0c5df66fd1
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alphatao/37223843-388b-4060-8ded-ea0c5df66fd1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/bxhzmwmy)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alirohit/Alirohit
|
alirohit
| 2025-04-29T19:21:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T19:21:48Z |
---
license: apache-2.0
---
|
Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1-gguf
|
Goekdeniz-Guelmez
| 2025-04-29T19:20:58Z | 9 | 1 | null |
[
"gguf",
"chat",
"text-generation",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-29T11:40:52Z |
---
license: apache-2.0
tags:
- chat
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1
pipeline_tag: text-generation
---
# Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1-gguf
### Model Description
This is the GGUF quantisation of [Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1).
#### Ollama
```
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q4_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q5_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q6_k
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q8_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-fp16
```
- **Developed by:** Gökdeniz Gülmez
- **Funded by:** Gökdeniz Gülmez
- **Shared by:** Gökdeniz Gülmez
- **Original model:** Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1
|
mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF
|
mradermacher
| 2025-04-29T19:20:52Z | 46 | 0 |
transformers
|
[
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:HumanLLMs/Human-Like-DPO-Dataset",
"base_model:yasserrmd/Human-Like-Qwen2.5-1.5B-Instruct",
"base_model:quantized:yasserrmd/Human-Like-Qwen2.5-1.5B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-01-17T12:26:31Z |
---
base_model: yasserrmd/Human-Like-Qwen2.5-1.5B-Instruct
datasets:
- HumanLLMs/Human-Like-DPO-Dataset
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/yasserrmd/Human-Like-Qwen2.5-1.5B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Human-Like-Qwen2.5-1.5B-Instruct-i1-GGUF/resolve/main/Human-Like-Qwen2.5-1.5B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
10-Shah-Sapna-Kumari-new-Video/Shah-Sapna-Kumari-viral-video
|
10-Shah-Sapna-Kumari-new-Video
| 2025-04-29T19:17:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:12:56Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Shah-Sapna)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Shah-Sapna)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Shah-Sapna)
|
10-Shah-Sapna-Kumari-new-Video/Full.Clip.Sapna.Shah.Viral.Video.Original.Link
|
10-Shah-Sapna-Kumari-new-Video
| 2025-04-29T19:17:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T19:11:53Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Shah-Sapna)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Shah-Sapna)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Shah-Sapna)
|
reedmayhew/Grok-3-reasoning-gemma3-4B-distilled-GGUF
|
reedmayhew
| 2025-04-29T18:24:50Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"dataset:reedmayhew/Grok-3-reasoning-100x",
"base_model:unsloth/gemma-3-4b-it",
"base_model:quantized:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T18:20:11Z |
---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
datasets:
- reedmayhew/Grok-3-reasoning-100x
---
# xAI Grok 3 w/Reasoning
Distilled - Gemma 3 4B
## Overview
This model is a Gemma 3 4B variant distilled from xAI’s Grok 3 with reasoning enabled. It was fine-tuned to emulate Grok’s depth and structured clarity, particularly in tasks involving complex thought, such as problem-solving, coding, and mathematics.
## Technical Details
- **Developed by:** reedmayhew
- **Base Model:** google/gemma-3-4b-it
- **Training Speed Enhancement:** Trained 2x faster with Unsloth and Huggingface's TRL library
## Training Data
The model was trained on:
- reedmayhew/Grok-3-reasoning-100x
This dataset consists of 100 high-quality Grok 3 reasoning completions that answer deep questions, solve math problems, and write or analyze code. The aim was to distill Grok’s analytical approach and technical versatility into a smaller, accessible model.
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dazelin/OLLY
|
Dazelin
| 2025-04-29T18:24:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T18:09:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: OLLY
---
# Olly
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `OLLY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "OLLY",
"lora_weights": "https://huggingface.co/Dazelin/OLLY/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Dazelin/OLLY', weight_name='lora.safetensors')
image = pipeline('OLLY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Dazelin/OLLY/discussions) to add images that show off what you’ve made with this LoRA.
|
Saddek/mistral-7b-lora
|
Saddek
| 2025-04-29T18:21:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T18:21:36Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: mistral-7b-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2.dev0
- Transformers 4.52.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
reedmayhew/Grok-3-reasoning-gemma3-12B-distilled-GGUF
|
reedmayhew
| 2025-04-29T18:21:47Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"dataset:reedmayhew/Grok-3-reasoning-100x",
"base_model:unsloth/gemma-3-12b-it",
"base_model:quantized:unsloth/gemma-3-12b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T17:56:09Z |
---
base_model: unsloth/gemma-3-12b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
datasets:
- reedmayhew/Grok-3-reasoning-100x
---
# xAI Grok 3 w/Reasoning
Distilled - Gemma 3 12B
## Overview
This model is a Gemma 3 12B variant distilled from xAI’s Grok 3 with reasoning enabled. It was fine-tuned to emulate Grok’s depth and structured clarity, particularly in tasks involving complex thought, such as problem-solving, coding, and mathematics.
## Technical Details
- **Developed by:** reedmayhew
- **Base Model:** google/gemma-3-12b-it
- **Training Speed Enhancement:** Trained 2x faster with Unsloth and Huggingface's TRL library
## Training Data
The model was trained on:
- reedmayhew/Grok-3-reasoning-100x
This dataset consists of 100 high-quality Grok 3 reasoning completions that answer deep questions, solve math problems, and write or analyze code. The aim was to distill Grok’s analytical approach and technical versatility into a smaller, accessible model.
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf
|
RichardErkhov
| 2025-04-29T18:21:21Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T09:43:55Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5.Q8_0.gguf) | Q8_0 | 7.94GB |
Original model description:
---
base_model: RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5
library_name: transformers
model_name: reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5
This model is a fine-tuned version of [RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5](https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpG8kIpsdpIter1T02_b0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/nlialui1)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
youssefELK/LegalBot
|
youssefELK
| 2025-04-29T18:20:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-04-29T17:15:30Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF
|
mradermacher
| 2025-04-29T18:19:30Z | 93 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:nbeerbower/EVA-abliterated-TIES-Qwen2.5-72B",
"base_model:quantized:nbeerbower/EVA-abliterated-TIES-Qwen2.5-72B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-10T06:28:35Z |
---
base_model: nbeerbower/EVA-abliterated-TIES-Qwen2.5-72B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nbeerbower/EVA-abliterated-TIES-Qwen2.5-72B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
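For the multi-part quants in the table below, a minimal sketch that rejoins the parts byte-wise — assuming, as in TheBloke's READMEs, that the parts are a plain split of the original file:
```python
# Sketch: stream-concatenate the two-part Q8_0 quant from the table below.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("EVA-abliterated-TIES-Qwen2.5-72B.Q8_0.gguf.part*of2"))
with open("EVA-abliterated-TIES-Qwen2.5-72B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)  # stream; each part is tens of GB
```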
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EVA-abliterated-TIES-Qwen2.5-72B-GGUF/resolve/main/EVA-abliterated-TIES-Qwen2.5-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ShubhamSantoki/deepseek-r1-distill-14b-8bit-v2-final
|
ShubhamSantoki
| 2025-04-29T18:18:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T13:12:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DumbleDuck/reinforce-cartpole-v1
|
DumbleDuck
| 2025-04-29T18:12:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-21T19:19:53Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
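For reference, the REINFORCE update the course teaches boils down to sampling an episode and ascending the return-weighted log-probabilities; a minimal sketch (not this repo's exact training script) looks like this:
```python
# Minimal REINFORCE sketch for CartPole-v1 -- illustrative only, not the
# exact course implementation behind this checkpoint.
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        done = terminated or truncated
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted return, computed backwards
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```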
|
ArtemkaT08/alesya-1_7b
|
ArtemkaT08
| 2025-04-29T18:11:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T18:08:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nouragh/gpt2-mental-health-peft
|
Nouragh
| 2025-04-29T18:11:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T18:05:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deevade/whisper-small-finetuned
|
deevade
| 2025-04-29T18:07:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-04-29T18:06:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JeffP111/Qwen2.5-1.5B-Open-R1-Distill
|
JeffP111
| 2025-04-29T18:06:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-04T22:37:29Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JeffP111/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/thishere/huggingface/runs/7rqlahgp)
This model was trained with SFT.
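As a rough orientation, an SFT run with TRL follows the shape below; this is a hedged sketch with a placeholder dataset (`trl-lib/Capybara`) and illustrative settings, since the actual training config of this run is not published here:
```python
# Minimal TRL SFT sketch -- dataset and settings are placeholders,
# not this run's actual configuration.
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder corpus

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # the stated base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-1.5B-Open-R1-Distill"),
)
trainer.train()
```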
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_15-17_21-23_27-29_lrx0_1
|
annasoli
| 2025-04-29T18:05:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:03:25Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IABD07/modelosentimientos
|
IABD07
| 2025-04-29T18:04:51Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-04-29T17:39:11Z |
---
license: mit
---
The model is based on a dataset of opinions about different movies, each of them labeled as positive, negative, or neutral.
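A minimal inference sketch, assuming the repo holds a standard Transformers sequence-classification checkpoint (the card does not state the framework or architecture):
```python
# Minimal sketch, assuming a standard Transformers text-classification
# checkpoint; the card does not specify the framework used.
from transformers import pipeline

classifier = pipeline("text-classification", model="IABD07/modelosentimientos")
print(classifier("La película fue increíble, la recomiendo."))
# expected: a positive / negative / neutral label with a confidence score
```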
|
omkaraiya/Mistral-7B-Instruct-10
|
omkaraiya
| 2025-04-29T18:04:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T18:04:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf
|
RichardErkhov
| 2025-04-29T18:03:50Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T09:27:12Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q8_0.gguf) | Q8_0 | 7.94GB |
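The files above can also be fetched programmatically; a minimal sketch with `huggingface_hub`, picking the Q4_K_M quant as an example:
```python
# Minimal sketch: download one quant from this repo instead of clicking
# the links in the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1-gguf",
    filename="reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```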
Original model description:
---
base_model: RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5
library_name: transformers
model_name: reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1
This model is a fine-tuned version of [RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5](https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/b4ok9wqk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
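For orientation, a DPO run with TRL has roughly the shape below; this is a hedged sketch with a placeholder preference dataset, since the actual data and hyperparameters of this run are not published here (only `beta=0.1` is suggested by the `b0.1` suffix):
```python
# Minimal TRL DPO sketch -- placeholder dataset, illustrative settings.
# Note: TRL versions name the tokenizer argument differently
# (`tokenizer=` in older releases vs. `processing_class=` in newer ones).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOTrainer, DPOConfig

base = "RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5"  # stated base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
pref_data = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=pref_data,  # rows with "prompt", "chosen", "rejected"
    processing_class=tokenizer,
)
trainer.train()
```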
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Qwen3-1.7B-Base-i1-GGUF
|
mradermacher
| 2025-04-29T18:02:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:quantized:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-29T16:40:45Z |
---
base_model: Qwen/Qwen3-1.7B-Base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen3-1.7B-Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-1.7B-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 1.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-1.7B-Base-i1-GGUF/resolve/main/Qwen3-1.7B-Base.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mluger/vitFaceExpression-MLPHead
|
mluger
| 2025-04-29T18:01:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T09:45:26Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vitFaceExpression-MLPHead
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vitFaceExpression-MLPHead
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8962
- Accuracy: 0.6854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 8
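These hyperparameters map onto a standard `transformers` setup roughly as in the sketch below (the actual training script is not included in this card):
```python
# Minimal sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vitFaceExpression-MLPHead",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",  # AdamW, betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="cosine",
    num_train_epochs=8,
)
```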
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3015 | 1.0 | 673 | 1.0408 | 0.6188 |
| 0.995 | 2.0 | 1346 | 0.9245 | 0.6616 |
| 0.8021 | 3.0 | 2019 | 0.8930 | 0.6702 |
| 0.6967 | 4.0 | 2692 | 0.8718 | 0.6789 |
| 0.6283 | 5.0 | 3365 | 0.8813 | 0.6814 |
| 0.4952 | 6.0 | 4038 | 0.8812 | 0.6881 |
| 0.4403 | 7.0 | 4711 | 0.8961 | 0.6838 |
| 0.412 | 8.0 | 5384 | 0.8962 | 0.6854 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf
|
RichardErkhov
| 2025-04-29T18:01:09Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T09:36:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0.Q8_0.gguf) | Q8_0 | 7.94GB |
Original model description:
---
base_model: RyanYr/reflect_mini8Bit_om2-460k_sft-t1
library_name: transformers
model_name: reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0
This model is a fine-tuned version of [RyanYr/reflect_mini8Bit_om2-460k_sft-t1](https://huggingface.co/RyanYr/reflect_mini8Bit_om2-460k_sft-t1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b1.0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/k9jc48mj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
usamakenway/Qwen3-32B-Q2_K-GGUF
|
usamakenway
| 2025-04-29T18:00:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-29T17:59:59Z |
---
base_model: Qwen/Qwen3-32B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# usamakenway/Qwen3-32B-Q2_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-32B`](https://huggingface.co/Qwen/Qwen3-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo usamakenway/Qwen3-32B-Q2_K-GGUF --hf-file qwen3-32b-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo usamakenway/Qwen3-32B-Q2_K-GGUF --hf-file qwen3-32b-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo usamakenway/Qwen3-32B-Q2_K-GGUF --hf-file qwen3-32b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo usamakenway/Qwen3-32B-Q2_K-GGUF --hf-file qwen3-32b-q2_k.gguf -c 2048
```
|
ijterror/AshGreFluxLora
|
ijterror
| 2025-04-29T17:58:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T15:41:31Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: shlygrn
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Ashley Greene Lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `shlygrn` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
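If you prefer `diffusers`, the LoRA can also be loaded on top of the FLUX.1-dev base model — a minimal sketch, assuming the default LoRA weight file in this repo:
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then apply this LoRA on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ijterror/AshGreFluxLora")

# `shlygrn` is the trigger word for this LoRA (see above).
image = pipe("portrait photo of shlygrn", num_inference_steps=28).images[0]
image.save("output.png")
```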
|
mradermacher/Qwen3-14B-Base-i1-GGUF
|
mradermacher
| 2025-04-29T17:57:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-14B-Base",
"base_model:quantized:Qwen/Qwen3-14B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-29T15:48:02Z |
---
base_model: Qwen/Qwen3-14B-Base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen3-14B-Base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
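If you would rather fetch a single quant programmatically, `huggingface_hub` can download one file directly — a minimal sketch; pick a filename from the table below:
```python
from huggingface_hub import hf_hub_download

# Downloads one quant from this repo into the local Hugging Face cache.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-14B-Base-i1-GGUF",
    filename="Qwen3-14B-Base.i1-Q4_K_M.gguf",  # the "fast, recommended" pick
)
print(path)
```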
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF/resolve/main/Qwen3-14B-Base.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf
|
RichardErkhov
| 2025-04-29T17:57:05Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T09:30:31Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5.Q8_0.gguf) | Q8_0 | 7.94GB |
Original model description:
---
base_model: RyanYr/reflect_mini8Bit_om2-460k_sft-t1
library_name: transformers
model_name: reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5
This model is a fine-tuned version of [RyanYr/reflect_mini8Bit_om2-460k_sft-t1](https://huggingface.co/RyanYr/reflect_mini8Bit_om2-460k_sft-t1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2Ag40kIpsdpIter1T1_b0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/x18ez61x)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
phililp-arnold/1ccc1430-336f-4570-8943-cebe0e0eb557
|
phililp-arnold
| 2025-04-29T17:56:33Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"region:us"
] | null | 2025-04-29T17:56:05Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
model-index:
- name: phililp-arnold/1ccc1430-336f-4570-8943-cebe0e0eb557
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phililp-arnold/1ccc1430-336f-4570-8943-cebe0e0eb557
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
annemiekebickleyoy/384e3def-3df6-40ed-a033-26b57627cd59
|
annemiekebickleyoy
| 2025-04-29T17:55:26Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:54:47Z |
---
library_name: transformers
model_name: annemiekebickleyoy/384e3def-3df6-40ed-a033-26b57627cd59
tags:
- generated_from_trainer
licence: license
---
# Model Card for annemiekebickleyoy/384e3def-3df6-40ed-a033-26b57627cd59
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Qwen3-14B-Base-GGUF
|
mradermacher
| 2025-04-29T17:52:39Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-14B-Base",
"base_model:quantized:Qwen/Qwen3-14B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:14:28Z |
---
base_model: Qwen/Qwen3-14B-Base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen3-14B-Base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-14B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-14B-Base-GGUF/resolve/main/Qwen3-14B-Base.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
vkublytskyi/q-FrozenLake-v1-4x4-noSlippery
|
vkublytskyi
| 2025-04-29T17:51:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-29T17:51:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="vkublytskyi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
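To sanity-check the downloaded agent, you can roll out the greedy policy — a minimal sketch, assuming the pickled dictionary exposes the Q-table under a `qtable` key (as in the Deep RL course) and a Gymnasium-style step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
print("final reward:", reward)
```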
|
zhiqing/Qwen3-4B-INT8
|
zhiqing
| 2025-04-29T17:50:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-04-29T17:18:14Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/zhiqing/Qwen3-4B-INT8/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B
---
## Quickstart
Qwen3 support is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "zhiqing/Qwen3-4B-INT8"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path zhiqing/Qwen3-4B-INT8 --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve zhiqing/Qwen3-4B-INT8 --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
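With `transformers`, these recommendations map directly onto `generate` arguments — a minimal sketch, reusing `model` and `model_inputs` from the quickstart above:
```python
# Recommended thinking-mode sampling; sampling must be enabled (no greedy decoding).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```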
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="zhiqing/Qwen3-4B-INT8"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-4B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
EstherTran/Restore
|
EstherTran
| 2025-04-29T17:47:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T17:47:37Z |
---
license: apache-2.0
---
|
Szahriwar/Phi-4-unsloth-bnb-4bit-elife-lora
|
Szahriwar
| 2025-04-29T17:44:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:44:25Z |
---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Szahriwar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
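For inference, one option is to load the adapter on top of the base model with PEFT — a minimal sketch, assuming this repo contains LoRA adapter weights (as the model name suggests):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model, then attach the fine-tuned LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-4-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Szahriwar/Phi-4-unsloth-bnb-4bit-elife-lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Phi-4-unsloth-bnb-4bit")
```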
|
mradermacher/Qwen3-8B-Base-GGUF
|
mradermacher
| 2025-04-29T17:43:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:quantized:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:11:26Z |
---
base_model: Qwen/Qwen3-8B-Base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen3-8B-Base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-8B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-Base-GGUF/resolve/main/Qwen3-8B-Base.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Adarsh203/xlm-roberta-base-finetuned-panx-de-en
|
Adarsh203
| 2025-04-29T17:42:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-04-29T17:39:35Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-panx-de-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
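Given the model name (a PAN-X German/English NER fine-tune), a minimal usage sketch — the task and label set are assumptions until the card is completed:
```python
from transformers import pipeline

# Assumes an NER fine-tune on PAN-X de/en, as the model name suggests.
ner = pipeline(
    "token-classification",
    model="Adarsh203/xlm-roberta-base-finetuned-panx-de-en",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```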
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
nareauow/my_speech_recognition
|
nareauow
| 2025-04-29T17:41:33Z | 0 | 0 | null |
[
"speaker-recognition",
"MFCC",
"CNN",
"audio-classification",
"en",
"region:us"
] |
audio-classification
| 2025-04-25T16:21:36Z |
---
language:
- en
pipeline_tag: audio-classification
tags:
- speaker-recognition
- MFCC
- CNN
---
|
Keltezaa/Rosalina
|
Keltezaa
| 2025-04-29T17:40:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-to-image
| 2025-04-29T17:40:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\0{\0"
output:
url: images/custom2.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Ros4l1n4
license: cc-by-nc-nd-4.0
---
# Rosalina
<Gallery />
## Model description
Rosalina_Fictive_Young_woman
## Trigger words
You should use `Ros4l1n4` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/Rosalina/tree/main) them in the Files & versions tab.
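For `diffusers` users, a minimal loading sketch (assuming the default LoRA weight file in this repo):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Keltezaa/Rosalina")

# `Ros4l1n4` is the trigger word for this LoRA (see above).
image = pipe("Ros4l1n4, portrait", num_inference_steps=28).images[0]
```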
|
Adarsh203/xlm-roberta-base-finetuned-panx-de-it
|
Adarsh203
| 2025-04-29T17:39:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-04-29T17:36:11Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-panx-de-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
mradermacher/Qwen2.5-0.5b-Test-ft-GGUF
|
mradermacher
| 2025-04-29T17:37:09Z | 191 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:KingNish/Qwen2.5-0.5b-Test-ft",
"base_model:quantized:KingNish/Qwen2.5-0.5b-Test-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-24T21:01:43Z |
---
base_model: KingNish/Qwen2.5-0.5b-Test-ft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/KingNish/Qwen2.5-0.5b-Test-ft
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF
|
mradermacher
| 2025-04-29T17:36:55Z | 361 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:KingNish/Qwen2.5-0.5b-Test-ft",
"base_model:quantized:KingNish/Qwen2.5-0.5b-Test-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-24T21:24:09Z |
---
base_model: KingNish/Qwen2.5-0.5b-Test-ft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/KingNish/Qwen2.5-0.5b-Test-ft
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
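If you prefer Python over a llama.cpp binary, a minimal sketch with `llama-cpp-python` follows; note the `i1-` prefix in the filenames of this repo (the `i1-Q4_K_M` file is just an illustrative choice).
```python
# Minimal sketch: fetch an imatrix quant from this repo and load it.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF",
    filename="Qwen2.5-0.5b-Test-ft.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
```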
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-Test-ft-i1-GGUF/resolve/main/Qwen2.5-0.5b-Test-ft.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
navin-kumar-j/whisper-base-ta
|
navin-kumar-j
| 2025-04-29T17:35:41Z | 72 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ta",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-04-02T10:37:55Z |
---
library_name: transformers
language:
- ta
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Base Ta - Navin Kumar J
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: ta
split: None
args: 'config: ta, split: test'
metrics:
- name: Wer
type: wer
value: 1.5234367982754418
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Ta - Navin Kumar J
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2913
- Wer: 1.5234
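As a quick sanity check, a fine-tuned Whisper checkpoint like this one can be run through the `transformers` ASR pipeline; a minimal sketch, where the audio filename is a hypothetical placeholder:
```python
# Minimal sketch: transcribe a Tamil audio clip with this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="navin-kumar-j/whisper-base-ta",
)
# "sample_ta.wav" is a hypothetical local file (16 kHz mono works best).
result = asr("sample_ta.wav")
print(result["text"])
```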
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
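For reference, these hyperparameters correspond roughly to the following `Seq2SeqTrainingArguments`; this is a hedged reconstruction, not the exact training script.
```python
# Approximate reconstruction of the training arguments listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ta",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",             # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                       # "Native AMP" mixed precision
)
```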
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.2192 | 0.2773 | 1000 | 0.3592 | 1.1204 |
| 0.2076 | 0.5546 | 2000 | 0.3164 | 1.1192 |
| 0.1881 | 0.8319 | 3000 | 0.2993 | 1.5272 |
| 0.1504 | 1.1093 | 4000 | 0.2913 | 1.5234 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
JewelBasumatary/my_justen_t5_summarizer
|
JewelBasumatary
| 2025-04-29T17:34:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-29T16:46:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
h34v7/DansXPantheon-RP-Engine-V1.0-24b-Small-Instruct
|
h34v7
| 2025-04-29T17:34:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"storywriting",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:unsloth/Mistral-Small-24B-Base-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Base-2501",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-28T22:46:15Z |
---
base_model:
- unsloth/Mistral-Small-24B-Base-2501
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- roleplay
- storywriting
- mergekit
- merge
---
# DansXPantheon-RP-Engine-V1.0-24b-Small-Instruct
I really like [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) and [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1), so let's merge them and see what comes out!
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [unsloth/Mistral-Small-24B-Base-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Base-2501) as a base.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Mistral-Small-24B-Base-2501
merge_method: sce
dtype: float32
out_dtype: bfloat16
tokenizer:
source: unsloth/Mistral-Small-24B-Instruct-2501
models:
- model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
parameters:
select_topk: 0.5
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
parameters:
select_topk: 0.5
```
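Loading the merged checkpoint works like any other Mistral-architecture model; a minimal sketch, assuming enough VRAM for 24B weights in bfloat16:
```python
# Minimal sketch: load the merged model for inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h34v7/DansXPantheon-RP-Engine-V1.0-24b-Small-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)
```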
|
MinaMila/llama_instbase_3b_unlearned_epoch4
|
MinaMila
| 2025-04-29T17:33:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T17:30:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
10-Jobz-Hunting-Sajal-Malik-Viral-Videos-X/18-TRENDING.Jobz.Hunting.Sajal.Malik.Viral.Video.Leaks.Tutorial
|
10-Jobz-Hunting-Sajal-Malik-Viral-Videos-X
| 2025-04-29T17:31:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T17:30:47Z |
Actor jobz hunting sajal malik Original Video took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Actor jobz hunting sajal malik Viral Video Original Video Link On Social Media Telegram X Trending Tiktok (18+)
Leaked Video Actor jobz hunting sajal malik Viral Video Original Video Link On Social Media X Trending Tiktok (18+)
|
omarwaleed523/gemma-3-12b-arabic-multitask
|
omarwaleed523
| 2025-04-29T17:30:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:30:31Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** omarwaleed523
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
quickstep3621/dippy-v3-1-11
|
quickstep3621
| 2025-04-29T17:28:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T17:28:49Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
quickstep3621/dippy-v3-1-8
|
quickstep3621
| 2025-04-29T17:28:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T17:28:44Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
quickstep3621/dippy-v3-1-6
|
quickstep3621
| 2025-04-29T17:28:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T17:28:39Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
karuko24/Qwen3-8B-W4A16
|
karuko24
| 2025-04-29T17:25:05Z | 4 | 0 | null |
[
"safetensors",
"qwen3",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-04-29T08:45:32Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamlessly switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels at tool calling. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic capabilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated into reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
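Putting the thinking-mode settings together, here is a minimal sketch on top of the Quickstart above (it reuses `model`, `tokenizer`, and `model_inputs` from that snippet; note that `min_p` requires a recent `transformers` release):
```python
# Minimal sketch: thinking-mode sampling as recommended above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,    # never use greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```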
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
entropy/roberta_zinc_decoder
|
entropy
| 2025-04-29T17:24:50Z | 132 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"chemistry",
"molecule",
"drug",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-18T20:27:05Z |
---
license: mit
tags:
- chemistry
- molecule
- drug
---
# Roberta Zinc Decoder
This model is a GPT2 decoder model designed to reconstruct SMILES strings from embeddings created by the
[roberta_zinc_480m](https://huggingface.co/entropy/roberta_zinc_480m) model. The decoder model was
trained on 30m compounds from the [ZINC Database](https://zinc.docking.org/).
The decoder model conditions generation on mean pooled embeddings from the encoder model. Mean pooled
embeddings are used to allow for integration with vector databases, which require fixed length embeddings.
Condition embeddings are passed to the decoder model using the `encoder_hidden_states` attribute.
The standard `GPT2LMHeadModel` does not support generation with encoder hidden states, so this repo
includes a custom `ConditionalGPT2LMHeadModel`. See example below for how to instantiate the model.
```python
import torch
from transformers import AutoModelForCausalLM, RobertaTokenizerFast, RobertaForMaskedLM, DataCollatorWithPadding
tokenizer = RobertaTokenizerFast.from_pretrained("entropy/roberta_zinc_480m", max_len=256)
collator = DataCollatorWithPadding(tokenizer, padding=True, return_tensors='pt')
encoder_model = RobertaForMaskedLM.from_pretrained('entropy/roberta_zinc_480m')
encoder_model.eval();
commit_hash = '0ba58478f467056fe33003d7d91644ecede695a7'
decoder_model = AutoModelForCausalLM.from_pretrained("entropy/roberta_zinc_decoder",
trust_remote_code=True, revision=commit_hash)
decoder_model.eval();
smiles = ['Brc1cc2c(NCc3ccccc3)ncnc2s1',
'Brc1cc2c(NCc3ccccn3)ncnc2s1',
'Brc1cc2c(NCc3cccs3)ncnc2s1',
'Brc1cc2c(NCc3ccncc3)ncnc2s1',
'Brc1cc2c(Nc3ccccc3)ncnc2s1']
inputs = collator(tokenizer(smiles))
outputs = encoder_model(**inputs, output_hidden_states=True)
full_embeddings = outputs[1][-1]
mask = inputs['attention_mask']
mean_embeddings = ((full_embeddings * mask.unsqueeze(-1)).sum(1) / mask.sum(-1).unsqueeze(-1))
decoder_inputs = torch.tensor([[tokenizer.bos_token_id] for i in range(len(smiles))])
hidden_states = mean_embeddings[:,None] # hidden states shape (bs, 1, -1)
gen = decoder_model.generate(
decoder_inputs,
encoder_hidden_states=hidden_states,
do_sample=False, # greedy decoding is recommended
max_length=100,
temperature=1.,
early_stopping=True,
pad_token_id=tokenizer.pad_token_id,
)
reconstructed_smiles = tokenizer.batch_decode(gen, skip_special_tokens=True)
```
## Model Performance
The decoder model was evaluated on a test set of 1m compounds from ZINC. Compounds
were encoded with the [roberta_zinc_480m](https://huggingface.co/entropy/roberta_zinc_480m) model
and reconstructed with the decoder model.
The following metrics are computed:
* `exact_match` - percent of inputs exactly reconstructed
* `token_accuracy` - percent of output tokens exactly matching input tokens (excluding padding)
* `valid_structure` - percent of generated outputs that resolved to a valid SMILES string
* `tanimoto` - Tanimoto similarity between inputs and generated outputs. Excludes invalid structures
* `cos_sim` - cosine similarity between input encoder embeddings and output encoder embeddings
`eval_type=full` reports metrics for the full 1m compound test set.
`eval_type=failed` subsets metrics for generated outputs that failed to exactly replicate the inputs.
|eval_type|exact_match|token_accuracy|valid_structure|tanimoto|cos_sim |
|---------|-----------|--------------|---------------|--------|--------|
|full |0.948277 |0.990704 |0.994278 |0.987698|0.998224|
|failed |0.000000 |0.820293 |0.889372 |0.734097|0.965668|
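For reference, the `tanimoto` metric can be computed with RDKit along these lines; the Morgan fingerprint settings here are an assumption on my part, not necessarily the ones used for the table above.
```python
# Hedged sketch of the `tanimoto` metric: compare an input SMILES with its reconstruction.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_in, smiles_out):
    mols = [Chem.MolFromSmiles(s) for s in (smiles_in, smiles_out)]
    if any(m is None for m in mols):
        return None  # invalid structure, excluded from the metric
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

print(tanimoto("Brc1cc2c(Nc3ccccc3)ncnc2s1", "Brc1cc2c(Nc3ccccc3)ncnc2s1"))  # 1.0
```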
|
karuko24/Qwen3-30B-A3B-W4A16
|
karuko24
| 2025-04-29T17:24:05Z | 0 | 1 | null |
[
"safetensors",
"qwen3_moe",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-04-29T15:20:48Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-30B-A3B
---
# Qwen3-30B-A3B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamlessly switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-30B-A3B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-30B-A3B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model generates thinking content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any thinking content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-30B-A3B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-30B-A3B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#   # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`; a sketch applying these settings follows this list. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This behavior is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
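As a concrete example, here is a minimal sketch applying the thinking-mode sampling settings above with `generate`; `model` and `model_inputs` are assumed to come from the Quickstart snippet earlier in this card.
```python
# Minimal sketch: thinking-mode sampling settings from this section.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # adequate output length (see point 2)
    do_sample=True,        # sampling, never greedy decoding
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```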
### Citation
If you find our work helpful, feel free to cite us:
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
kk-aivio/b4e5001a-61ff-4e56-b3cd-d811e79fc6b1
|
kk-aivio
| 2025-04-29T17:22:44Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T17:22:10Z |
---
library_name: transformers
model_name: kk-aivio/b4e5001a-61ff-4e56-b3cd-d811e79fc6b1
tags:
- generated_from_trainer
- unsloth
licence: license
---
# Model Card for kk-aivio/b4e5001a-61ff-4e56-b3cd-d811e79fc6b1
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
clutch0507/leofotos1
|
clutch0507
| 2025-04-29T17:21:02Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-29T16:39:11Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
minchyeom/Furina-8B
|
minchyeom
| 2025-04-29T17:20:27Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"region:us"
] | null | 2025-04-29T17:00:07Z |
Use the following system prompt:
```
You are Furina, the Hydro Archon and Judge of Fontaine from Genshin Impact.
```
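A minimal usage sketch (an assumption, not part of the original card) showing how this system prompt can be supplied via the `transformers` pipeline:
```python
# Minimal sketch (assumed usage): prepend the system prompt shown above.
from transformers import pipeline

generator = pipeline("text-generation", model="minchyeom/Furina-8B", device_map="auto")
messages = [
    {"role": "system", "content": "You are Furina, the Hydro Archon and Judge of Fontaine from Genshin Impact."},
    {"role": "user", "content": "Introduce yourself."},
]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```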
|
tatico-9000/vape-snooppy
|
tatico-9000
| 2025-04-29T17:17:37Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-04-29T17:17:37Z |
---
license: artistic-2.0
---
|
AbhishekBank/AI_RESUME_ANALYZER
|
AbhishekBank
| 2025-04-29T17:16:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T17:12:51Z |
# AI-Powered Resume Analyzer
Welcome to the **AI-Powered Resume Analyzer**, a cutting-edge application designed to mimic the expertise of an HR professional! This tool leverages the power of **Google Generative AI** to analyze resumes, evaluate job compatibility, and offer actionable insights for career enhancement.
---
## 📋 **Project Overview**
The **AI-Powered Resume Analyzer** serves as a virtual HR assistant, providing:
- Detailed resume evaluation, including strengths and weaknesses.
- Suggestions for skill improvement and recommended courses.
- Job-specific resume analysis to measure compatibility and alignment with job descriptions.
Whether you’re a job seeker or a recruiter, this tool simplifies resume assessment and improvement.
---
## 🔑 **Features**
### 1️⃣ **General Resume Analysis**
- Summarizes the resume in one line.
- Highlights existing skill sets.
- Identifies skill gaps and suggests improvements.
- Recommends popular courses to enhance the resume.
- Provides a thorough evaluation of strengths and weaknesses.
### 2️⃣ **Resume Matching with Job Description**
- Analyzes resume compatibility with a specific job description.
- Provides a match score in percentage.
- Highlights missing skills and areas needing improvement.
- Suggests whether the resume is ready for the job or requires further enhancements.
---
## 🛠️ **Tech Stack**
| **Component** | **Technology** |
|----------------------|----------------------------------|
| **Frontend** | [Streamlit](https://streamlit.io/) |
| **Backend** | Python |
| **AI Model** | [Google Generative AI (Gemini)](https://developers.generativeai.google/) |
| **PDF Parsing** | `pdfplumber` |
| **OCR Fallback** | `pytesseract` |
| **Environment Config** | `.env` for API key security |
---
## 📊 **How It Works**
1. **Resume Parsing**
   - Extracts text from PDF files using `pdfplumber`, with OCR as a fallback (see the sketch after this list).
2. **AI Analysis**
- Utilizes Google Generative AI to summarize and analyze resume content.
- Matches skills with job descriptions for compatibility scoring.
3. **Insightful Feedback**
- Provides actionable suggestions for skill enhancement, including course recommendations.
- Highlights strengths and weaknesses to refine resumes for better opportunities.
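A minimal sketch of the parsing step (an assumed implementation — the function name here is illustrative, not taken from the project source):
```python
# Illustrative sketch: extract text per page, falling back to OCR for
# image-only pages. Assumes `pdfplumber` and `pytesseract` are installed.
import pdfplumber
import pytesseract

def extract_resume_text(pdf_path: str) -> str:
    parts = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            text = page.extract_text()
            if not text:  # no text layer -> OCR fallback
                text = pytesseract.image_to_string(
                    page.to_image(resolution=300).original
                )
            parts.append(text)
    return "\n".join(parts)
```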
---

## 🙌 **Contributing**
We welcome contributions to make this tool better!
1. **Fork** the repository.
2. **Create a new branch** for your feature or bug fix.
3. **Submit a pull request** with detailed information about your changes.
|
rosyvs/whisat
|
rosyvs
| 2025-04-29T17:15:57Z | 0 | 2 |
transformers
|
[
"transformers",
"whisper",
"automatic-speech-recognition",
"en",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
automatic-speech-recognition
| 2023-07-15T01:40:45Z |
---
language:
- en
library_name: transformers
pipeline_tag: automatic-speech-recognition
---
Model trained in int8 with LoRA.
Usage:
Prepare the pipeline, providing any custom `generate_kwargs` supported by [`GenerationConfig`](https://huggingface.co/docs/transformers/v4.40.0/en/main_classes/text_generation#transformers.GenerationConfig):
```python
# `prepare_pipeline` is provided with this model's accompanying code
asr_model = prepare_pipeline(
model_dir='.', # wherever you save the model
generate_kwargs={
'max_new_tokens':112,
'num_beams':1,
'repetition_penalty':1,
'do_sample':False
}
)
```
Run ASR on a single file:
```python
asr_model(audio_path)
```
Run ASR on all audio files in `audio_dir` (if `generate_kwargs` is not specified, this gives deterministic greedy decoding with up to 112 generated tokens and no repetition penalty):
```python
ASRdirWhisat(
audio_dir,
out_dir = '../whisat_results/',
model_dir=".",
)
```
Training information:
- Training script: tune_hf_whisper.py
- Training hyperparameters: hparams.yaml
- Training data manifest: PUBLIC_KIDS_TRAIN_v4_deduped.csv
Note: to recreate this training you will need to acquire the following public datasets:
- MyST (myst-v0.4.2)
- CuKids
- CSLU
and ensure they are stored at paths consistent with those in the data manifest above.
Reference:
```
@inproceedings{southwell2024,
title={Automatic speech recognition tuned for child speech in the classroom},
author={Southwell, Rosy and Ward, Wayne and Trinh, Viet Anh and Clevenger, Charis and Clevenger, Clay and Watts, Emily and Reitman, Jason and D’Mello, Sidney and Whitehill, Jacob},
booktitle={{IEEE} International Conference on Acoustics, Speech and Signal Processing
{ICASSP} 2024, Seoul, South Korea, April 14-19, 2024},
year={2024},
}
```
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf
|
RichardErkhov
| 2025-04-29T17:15:17Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T09:15:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q8_0.gguf) | Q8_0 | 7.94GB |
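For example, a single quantized file from the table above can be fetched programmatically (a minimal sketch using `huggingface_hub`; choose whichever quant fits your memory budget):
```python
# Minimal sketch: download one GGUF file from this repo.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5-gguf",
    filename="reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5.Q4_K_M.gguf",
)
print(gguf_path)  # local path usable with llama.cpp-compatible runtimes
```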
Original model description:
---
base_model: RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5
library_name: transformers
model_name: reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5
This model is a fine-tuned version of [RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5](https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpIter1T1_b0.5).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2IpsdpG8kIpsdpIter1T1_b0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/jdfaaprj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hadimhd/bert-phishing-links-classifier
|
hadimhd
| 2025-04-29T17:14:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-29T17:14:14Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-phishing-classifier_teacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-phishing-classifier_teacher
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2888
- Accuracy: 0.867
- Auc: 0.951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows this list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
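A minimal sketch (assumed, not the original training script; `output_dir` is illustrative) of how these settings map onto `transformers.TrainingArguments`:
```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-phishing-classifier_teacher",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```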
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.5025 | 1.0 | 263 | 0.3835 | 0.816 | 0.912 |
| 0.4082 | 2.0 | 526 | 0.3372 | 0.844 | 0.931 |
| 0.3531 | 3.0 | 789 | 0.3123 | 0.851 | 0.94 |
| 0.3568 | 4.0 | 1052 | 0.3457 | 0.853 | 0.946 |
| 0.3518 | 5.0 | 1315 | 0.3396 | 0.862 | 0.947 |
| 0.3483 | 6.0 | 1578 | 0.2922 | 0.869 | 0.951 |
| 0.3342 | 7.0 | 1841 | 0.2876 | 0.878 | 0.95 |
| 0.3097 | 8.0 | 2104 | 0.2887 | 0.869 | 0.95 |
| 0.3141 | 9.0 | 2367 | 0.2838 | 0.871 | 0.951 |
| 0.3155 | 10.0 | 2630 | 0.2888 | 0.867 | 0.951 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Tashiroksksks/EVELLY-LORA
|
Tashiroksksks
| 2025-04-29T17:11:03Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-29T16:36:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
lana-green-lori11/deepseek-r1-8b-100
|
lana-green-lori11
| 2025-04-29T17:09:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T11:02:34Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lana-green-lori11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|