| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-07 15:50:20 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 491 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-07 15:48:55 |
| card | string | lengths 11 to 1.01M |
**ubergarm/GLM-4.5-Air-GGUF** | author: ubergarm | last_modified: 2025-08-07T12:25:38Z | downloads: 5,259 | likes: 15 | library_name: null | pipeline_tag: text-generation | createdAt: 2025-08-02T06:05:33Z | tags: gguf, imatrix, conversational, ik_llama.cpp, text-generation, base_model:zai-org/GLM-4.5-Air, base_model:quantized:zai-org/GLM-4.5-Air, license:mit, endpoints_compatible, region:us

---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: zai-org/GLM-4.5-Air
license: mit
base_model_relation: quantized
tags:
- imatrix
- conversational
- ik_llama.cpp
---
## `ik_llama.cpp` imatrix Quantizations of zai-org/GLM-4.5-Air
This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!
*NOTE*: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP, with Windows builds for CUDA 12.9. Also check the [Windows builds by Thireus here](https://github.com/Thireus/ik_llama.cpp/releases), which have been built for CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
## Big Thanks
Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!
## Quant Collection
Perplexity computed against *wiki.test.raw*.

These first two are just test quants for baseline perplexity comparison:
* `BF16` 205.811 GiB (16.004 BPW)
- Final estimate: PPL = 4.5704 +/- 0.02796
* `Q8_0` 109.381 GiB (8.505 BPW)
- Final estimate: PPL = 4.5798 +/- 0.02804
## IQ5_K 77.704 GiB (6.042 BPW)
Final estimate: PPL = 4.5867 +/- 0.02806
<details>
<summary>👈 Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# 47 Repeating Layers [0-46]
# Note: All ffn_down.* layers are not divisible by 256 so have limited quantization options.
# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0
# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
# Shared Expert Layers [1-46]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [1-46]
blk\.(1)\.ffn_down_exps\.weight=q8_0
blk\.(1)\.ffn_(gate|up)_exps\.weight=q8_0
blk\..*\.ffn_down_exps\.weight=q6_0
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
# NextN MTP Layer [46]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N 0 -m 0 \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/imatrix-GLM-4.5-Air-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ5_K.gguf \
IQ5_K \
192
```
</details>
## IQ5_KS 72.855 GiB (5.665 BPW)
Final estimate: PPL = 4.5948 +/- 0.02815
<details>
<summary>👈 Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# 47 Repeating Layers [0-46]
# Note: All ffn_down.* layers are not divisible by 256 so have limited quantization options.
# Attention
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=iq5_ks
# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q6_0
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [1-46]
blk\..*\.ffn_down_shexp\.weight=q6_0
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [1-46]
blk\..*\.ffn_down_exps\.weight=q6_0
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_ks
# NextN MTP Layer [46]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N 0 -m 0 \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/imatrix-GLM-4.5-Air-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ5_KS.gguf \
IQ5_KS \
192
```
</details>
## IQ4_K 62.910 GiB (4.892 BPW)
Final estimate: PPL = 4.6273 +/- 0.02839
<details>
<summary>👈 Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# 47 Repeating Layers [0-46]
# Note: All ffn_down.* layers are not divisible by 256 so have limited quantization options.
# Attention
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=iq5_ks
# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q6_0
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [1-46]
blk\..*\.ffn_down_shexp\.weight=q6_0
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [1-46]
blk\..*\.ffn_down_exps\.weight=q5_0
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_k
# NextN MTP Layer [46]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N 1 -m 1 \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/imatrix-GLM-4.5-Air-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ4_K.gguf \
IQ4_K \
192
```
</details>
## IQ4_KSS 54.801 GiB (4.261 BPW)
Final estimate: PPL = 4.7056 +/- 0.02909
<details>
<summary>👈 Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# 47 Repeating Layers [0-46]
# Note: All ffn_down.* layers are not divisible by 256 so have limited quantization options.
# Attention
blk\.(0|1)\.attn_q.*=q8_0
blk\.(0|1)\.attn_k.*=q8_0
blk\.(0|1)\.attn_v.*=q8_0
blk\.(0|1)\.attn_output.*=q8_0
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq5_ks
# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=q6_0
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [1-46]
blk\..*\.ffn_down_shexp\.weight=q6_0
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [1-46]
#blk\.(1|46)\.ffn_down_exps\.weight=q8_0
#blk\.(1|46)\.ffn_(gate|up)_exps\.weight=q8_0
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss
# NextN MTP Layer [46]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N 0 -m 0 \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/imatrix-GLM-4.5-Air-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ4_KSS.gguf \
IQ4_KSS \
192
```
</details>
## IQ2_KL 43.870 GiB (3.411 BPW)
Final estimate: PPL = 5.0697 +/- 0.03166
<details>
<summary>👈 Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# 47 Repeating Layers [0-46]
# Note: All ffn_down.* layers are not divisible by 256 so have limited quantization options.
# Attention
blk\..*\.attn_q.*=iq4_ks
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq4_ks
# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=iq4_nl
blk\..*\.ffn_(gate|up)\.weight=iq4_kss
# Shared Expert Layers [1-46]
blk\..*\.ffn_down_shexp\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_kss
# Routed Experts Layers [1-46]
blk\.(1)\.ffn_down_exps\.weight=iq4_nl
blk\.(1)\.ffn_(gate|up)_exps\.weight=iq4_kss
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl
# NextN MTP Layer [46]
blk\..*\.nextn\.embed_tokens\.weight=iq4_ks
blk\..*\.nextn\.shared_head_head\.weight=iq4_ks
blk\..*\.nextn\.eh_proj\.weight=q6_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N 0 -m 0 \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/imatrix-GLM-4.5-Air-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ2_KL.gguf \
IQ2_KL \
192
```
</details>
## IQ1_KT 36.039 GiB (2.802 BPW)
Final estimate: PPL = 5.8214 +/- 0.03767
<details>
<summary>👈 Secret Recipe</summary>
```bash
#!/usr/bin/env bash
custom="
# 47 Repeating Layers [0-46]
# Note: All ffn_down.* layers are not divisible by 256 so have limited quantization options.
# Attention
blk\..*\.attn_q.*=iq4_kt
blk\..*\.attn_k.*=iq4_kt
blk\..*\.attn_v.*=iq4_kt
blk\..*\.attn_output.*=iq4_kt
# First 1 Dense Layers [0]
blk\..*\.ffn_down\.weight=iq4_nl
blk\..*\.ffn_(gate|up)\.weight=iq4_kt
# Shared Expert Layers [1-46]
blk\..*\.ffn_down_shexp\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_kt
# Routed Experts Layers [1-46]
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt
# NextN MTP Layer [46]
blk\..*\.nextn\.embed_tokens\.weight=iq4_kt
blk\..*\.nextn\.shared_head_head\.weight=iq4_kt
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N 1 -m 1 \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/imatrix-GLM-4.5-Air-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
/mnt/raid/models/ubergarm/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ1_KT.gguf \
IQ1_KT \
192
```
</details>
## Quick Start
If you want to disable thinking, add `/nothink` (correct, no underscore) at the *end* of your prompt.
```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON -DGGML_BLAS=OFF -DGGML_SCHED_MAX_COPIES=1
$ cmake --build build --config Release -j $(nproc)
# Run API server
$ ./build/bin/llama-server \
--model GLM-4.5-Air-IQ4_KSS-00001-of-00002.gguf \
--alias ubergarm/GLM-4.5-Air-IQ4_KSS \
--chat-template chatglm4 \
--ctx-size 32768 \
-fa -fmoe \
-ctk q8_0 -ctv q8_0 \
-ub 4096 -b 4096 \
-ngl 99 \
-ot exps=CPU \
--parallel 1 \
--threads 8 \
--host 127.0.0.1 \
--port 8080 \
--no-mmap
```
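Once the server is up, you can sanity-check it against the OpenAI-compatible chat endpoint exposed by `llama-server` (a minimal sketch; adjust host, port, and alias to match the command above):
```bash
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ubergarm/GLM-4.5-Air-IQ4_KSS",
        "messages": [{"role": "user", "content": "Say hello in one sentence. /nothink"}],
        "max_tokens": 64
      }'
```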
## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
* [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
* [Mainline llama.cpp Draft PR14939](https://github.com/ggml-org/llama.cpp/pull/14939)
* [ik_llama.cpp GLM-4.5 MoE PR668](https://github.com/ikawrakow/ik_llama.cpp/pull/668)
**lastmass/MedGemma-GRPO** | author: lastmass | last_modified: 2025-08-07T09:40:01Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: image-text-to-text | createdAt: 2025-08-07T08:08:13Z | tags: transformers, safetensors, gemma3, image-text-to-text, text-generation-inference, unsloth, vllm, conversational, en, base_model:unsloth/medgemma-4b-it, base_model:finetune:unsloth/medgemma-4b-it, license:apache-2.0, endpoints_compatible, region:us

---
base_model: unsloth/medgemma-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- vllm
license: apache-2.0
language:
- en
---
# MedGemma-GRPO
## Model Description
This model is a fine-tuned version of the MedGemma model, adapted for tasks involving clinical case reasoning.
The goal of this project was to explore the effects of combining Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO) to guide the model's ability to follow a logical diagnostic path.
The model was trained on a synthetic dataset of medical case studies,
which, due to the nature of the large models used for generation, has a notable bias towards complex cardiovascular, thrombotic, and autoimmune diseases (e.g., Lupus, Pericarditis).
## Fine-tuning Procedure
The model was first adapted to the medical domain using Supervised Fine-Tuning (SFT) on a synthetic dataset. Subsequently,
Group Relative Policy Optimization (GRPO) was applied, utilizing a custom rule-based reward function.
This reward function was designed to guide the model towards accurate diagnoses and logically structured reasoning, while also penalizing overly verbose outputs.
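The exact reward function is not published in this card; the snippet below is only a hypothetical sketch of what a rule-based GRPO reward along these lines could look like. The tag names follow the prompt format shown later in this card, while the weights and the length threshold are invented for illustration.
```python
import re

def rule_based_reward(completion: str, reference_diagnosis: str) -> float:
    """Hypothetical GRPO reward: format compliance + diagnosis hit - verbosity penalty."""
    reward = 0.0
    # Reward a well-formed reasoning block
    if "<start_working_out>" in completion and "<end_working_out>" in completion:
        reward += 0.5
    # Reward a parsable solution block, and the correct diagnosis inside it
    match = re.search(r"<SOLUTION>(.*?)</SOLUTION>", completion, re.DOTALL)
    if match:
        reward += 0.5
        if reference_diagnosis.lower() in match.group(1).lower():
            reward += 1.0
    # Penalize overly verbose outputs
    if len(completion.split()) > 800:
        reward -= 0.5
    return reward
```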
# Using with Unsloth
```python
from unsloth import FastModel
import torch
max_seq_length=1024
model, tokenizer = FastModel.from_pretrained(
model_name = "lastmass/MedGemma-GRPO",
max_seq_length = max_seq_length,
load_in_4bit = False,
load_in_8bit = False,
full_finetuning = False,
)
reasoning_start = "<start_working_out>"
reasoning_end = "<end_working_out>"
solution_start = "<SOLUTION>"
solution_end = "</SOLUTION>"
system_prompt = \
f"""You are given a problem.
Think about the problem and provide your working out.
Place it between {reasoning_start} and {reasoning_end}.
Then, provide your solution between {solution_start}{solution_end}"""
user_input = ""
messages = [{
"role": "user",
"content": [{"type" : "text", "text" :system_prompt + user_input + reasoning_start,}]
}]
text = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    tokenize = False,
)
from transformers import TextStreamer
_ = model.generate(
**tokenizer([text], return_tensors = "pt").to("cuda"),
max_new_tokens =max_seq_length,
    # Greedy decoding for reproducible outputs
    temperature = 0.0,
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
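The call above only streams tokens to stdout; if you want the output as a string (for example, to pull out the final answer between the `<SOLUTION>` tags), here is a small follow-up sketch reusing the variables defined above:
```python
import re

inputs = tokenizer([text], return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=max_seq_length, temperature=0.0)
# Decode only the newly generated tokens
generated = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

solution = re.search(r"<SOLUTION>(.*?)</SOLUTION>", generated, re.DOTALL)
print(solution.group(1).strip() if solution else generated)
```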
# Intended Use & Limitations
This model is intended for research purposes only, as an exploration into enhancing the reasoning capabilities of smaller language models (4B) in a specialized domain.
# ⚠️ Disclaimer:
This model is NOT a medical device and should NEVER be used for actual medical diagnosis, treatment, or advice.
The outputs are generated by an AI and may contain inaccuracies or hallucinations, and they reflect the significant biases present in the training data.
Always consult a qualified healthcare professional for any medical concerns.
# Example Prompts and Outputs
Here are a couple of examples demonstrating the model's reasoning capabilities on complex cases with diagnostic traps.
## Question
```
A 58-year-old overweight male with a history of hypertension presents to the emergency department with a 12-hour history of excruciating pain, swelling, and redness in his right great toe. He states the pain started abruptly overnight and is so severe that even the bedsheet touching his toe is unbearable. He admits to drinking several beers and eating a large steak dinner the night before. His medications include hydrochlorothiazide for his blood pressure.
On examination, he has a low-grade fever of 38.1°C (100.6°F). The first metatarsophalangeal (MTP) joint of his right foot is markedly swollen, erythematous, warm, and exquisitely tender to palpation.
Initial laboratory results show:
CBC: White blood cell count is elevated at 13.5 x 10^9/L.
Serum Uric Acid: Markedly elevated.
CRP: Elevated.
What is the most likely diagnosis? What specific findings from his history and medications are significant risk factors for this condition? Finally, what is the gold standard for a definitive diagnosis, and what would it reveal?
```
## Answer
```
<start_working_out>
Alright, let's think this through. We have a 58-year-old man who's overweight and has a history of hypertension.
He's come in with excruciating pain, swelling, and redness in his right great toe.
The pain started abruptly overnight and is so severe that even the bedsheet touching his toe is unbearable.
He also mentions drinking several beers and eating a large steak dinner the night before.On examination,
he has a low-grade fever of 38.1°C and the first metatarsophalangeal (MTP) joint of his right foot is markedly swollen, erythematous, warm, and exquisitely tender to palpation.
His lab results show an elevated white blood cell count, a markedly elevated serum uric acid, and an elevated CRP.
These findings are pointing towards an inflammatory process, likely related to uric acid.Given his symptoms and lab results, the most likely diagnosis is gout.
Gout is characterized by the deposition of monosodium urate crystals in joints, often affecting the first MTP joint.
The sudden onset of severe pain, the presence of swelling and redness, and the elevated uric acid levels are all classic signs of gout.
Now, let's consider the risk factors. His history of hypertension and being overweight are significant risk factors for gout.
Obesity and hypertension are both associated with increased uric acid levels, which can lead to the formation of uric acid crystals and subsequent gout attacks.
The gold standard for a definitive diagnosis of gout is a joint aspiration. This procedure involves taking a sample of synovial fluid from the affected joint and examining it under a microscope for the presence of monosodium urate crystals.
The presence of these crystals confirms the diagnosis of gout.So, in summary, the most likely diagnosis is gout, and the joint aspiration would reveal the presence of monosodium urate crystals, confirming the diagnosis.
<end_working_out>
<SOLUTION>The most likely diagnosis for the 58-year-old male patient is gout.
The presentation of excruciating pain, swelling, redness, and warmth in the first metatarsophalangeal (MTP) joint of the right foot,
along with the abrupt onset of symptoms, elevated white blood cell count, markedly elevated serum uric acid, and elevated CRP,
are all characteristic of gout.Significant risk factors for this condition include his history of hypertension and being overweight.
Obesity and hypertension are both associated with increased uric acid levels, which can lead to the formation of uric acid crystals and subsequent gout attacks.
The gold standard for a definitive diagnosis of gout is a joint aspiration. This procedure involves taking a sample of synovial fluid from the affected joint and examining it under a microscope for the presence of monosodium urate crystals.
The presence of these crystals confirms the diagnosis of gout.In a joint aspiration, the presence of monosodium urate crystals would be revealed,
confirming the diagnosis of gout.
</SOLUTION>
```
## Question
```
A 48-year-old female presents to her primary care physician stating, "I just feel empty." She reports a 4-month period of pervasive low mood. She has lost interest in her hobbies, including gardening and book club, which she used to love.
She describes significant fatigue, stating she feels "exhausted all the time" despite sleeping more than usual.
She has difficulty concentrating at her job as an accountant and has made several simple errors recently. Her appetite has decreased, and she has had an unintentional weight loss of 10 lbs. She expresses feelings of worthlessness, saying "I feel like a failure as a wife and mother."
She denies any specific plans to harm herself but admits to sometimes "wishing she wouldn't wake up in the morning."
Her past medical history is unremarkable. A physical examination and basic metabolic panel are entirely within normal limits.
What is the most likely diagnosis for this patient? Based on the DSM-5 criteria, list at least five key symptoms from her history that support this diagnosis.
Finally, what is a crucial next step in assessing this patient's immediate safety?
```
## Answer
```
<start_working_out>
Alright, let's think about this. A 48-year-old woman is feeling empty and low in mood for the past four months.
She's lost interest in things she used to enjoy, like gardening and book club. That's a big red flag for depression.
She's also feeling really tired, even though she's sleeping more.
That's another classic symptom of depression.
And she's having trouble concentrating at work, which is impacting her performance.
She's lost her appetite and has lost weight unintentionally.
That's concerning and could be a sign of depression.She's feeling worthless and says she feels like a failure as a wife and mother.
That's a sign of low self-esteem, which is often associated with depression.She mentions wishing she wouldn't wake up in the morning.
That's a sign of suicidal ideation, which is a serious concern.
Her past medical history is unremarkable, and her physical exam and basic metabolic panel are normal.
This suggests the symptoms are likely psychological rather than physical.Based on the DSM-5 criteria,
she has five key symptoms: persistent low mood, loss of interest in activities, fatigue, difficulty concentrating, and feelings of worthlessness.
Given these symptoms, the most likely diagnosis is Major Depressive Disorder.Now, it's crucial to assess her immediate safety.
She admits to suicidal thoughts, which is a significant risk factor.The next step is to conduct a thorough safety assessment.
This involves asking direct questions about her suicidal thoughts, plans, and intent.
It's important to have a safety plan in place and to ensure she has access to mental health support.
This patient needs immediate attention to ensure her safety and well-being.
<end_working_out>
<SOLUTION>The most likely diagnosis for this patient is Major Depressive Disorder, based on the DSM-5 criteria.
The patient presents with several key symptoms that align with this diagnosis:
1. **Persistent low mood:** The patient reports feeling "empty" and experiencing pervasive low mood for the past four months.
2. **Loss of interest in activities:** She has lost interest in her hobbies, including gardening and book club, which she used to enjoy.
3. **Fatigue:** She describes feeling "exhausted all the time," even though she is sleeping more than usual.
4. **Difficulty concentrating:** She has difficulty concentrating at her job as an accountant, leading to simple errors.
5. **Feelings of worthlessness:** She expresses feelings of worthlessness, stating "I feel like a failure as a wife and mother."
In addition to these symptoms, the patient's unintentional weight loss of 10 lbs and suicidal ideation (wishing she wouldn't wake up in the morning) further support the diagnosis of Major Depressive Disorder.
A crucial next step in assessing this patient's immediate safety is to conduct a thorough safety assessment.
This involves asking direct questions about her suicidal thoughts, plans, and intent.
It is also important to have a safety plan in place and to ensure she has access to mental health support.
This assessment will help determine the level of risk and guide appropriate interventions to ensure her safety and well-being.
</SOLUTION>
```
# Uploaded finetuned model
- **Developed by:** lastmass
- **License:** apache-2.0
- **Finetuned from model :** unsloth/medgemma-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**InfiX-ai/InfiGFusion-14B** | author: InfiX-ai | last_modified: 2025-08-07T09:01:09Z | downloads: 0 | likes: 1 | library_name: null | pipeline_tag: null | createdAt: 2025-06-01T14:27:41Z | tags: pytorch, phi3, custom_code, arxiv:2505.13893, license:mit, region:us

---
license: mit
---
<h1 align="center">
InfiGFusion: Graph-on-Logits Distillation via Efficient Gromov-Wasserstein for Model Fusion
</h1>
<h4 align="center">
[](https://arxiv.org/pdf/2505.13893)
[](https://github.com/Reallm-Labs/InfiGFusion/edit/main/README.md)
[](https://huggingface.co/papers/2505.13893)
</h4>
**InfiGFusion** is the first structure-aware fusion framework for large language models that models semantic dependencies among logits using feature-level graphs. We introduce a novel Graph-on-Logits Distillation (GLD) loss that captures cross-dimension interactions via co-activation graphs and aligns them using an efficient, provable approximation of Gromov-Wasserstein distance (reducing complexity from O(n^4) to O(n log n)). Our released **InfiGFusion-14B** model consistently shows better performance, achieving +35.6 on Multistep Arithmetic and +37.06 on Causal Judgement over SFT, demonstrating superior multi-step and complex logic inference.
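As a loose, illustrative sketch only (this is not the paper's GLD implementation; the top-k restriction and normalization below are assumptions made for readability), the co-activation idea can be pictured as building a small graph over the most-activated logit dimensions of a sequence:
```python
import torch

def coactivation_graph(logits: torch.Tensor, k: int = 64) -> torch.Tensor:
    """Toy illustration: a (k, k) co-activation matrix over the k most-activated
    logit dimensions of a sequence. logits has shape (seq_len, vocab_size)."""
    probs = logits.softmax(dim=-1)              # (seq_len, vocab_size)
    dims = probs.sum(dim=0).topk(k).indices     # k most active dimensions overall
    sub = probs[:, dims]                        # (seq_len, k)
    return sub.T @ sub                          # symmetric co-activation "graph"
```
The actual GLD loss aligns such graphs between the source models and the fused model with the efficient Gromov-Wasserstein approximation; see the paper for the real formulation.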
## 🎉 News
🎉 The checkpoint for InfiGFusion-14B has been released on Hugging Face!
## 🎨 Fusion Framework

## 📕 Model Summary
| | |
|-------------------------|-------------------------------------------------------------------------------|
| **Developers** | Reallm-Labs |
| **Description** | InfiGFusion is an open fusion model series designed to fuse multiple domain LLMs into a single LLM. It excels in multi-step and relational inference, enabling robust performance across complex reasoning tasks.|
| **Architecture** | 14B parameters, dense decoder-only Transformer model |
| **Inputs** | Text, best suited for prompts in the chat format |
| **Max Context length** | 16K tokens |
| **Fusing input length** | 4K tokens |
| **Fusing time** | 195 hours |
| **Fusing data** | 520M tokens |
| **Outputs** | Generated text in response to input |
| **Status** | Static model trained on an offline dataset |
| **License** | MIT |
## 🩺 Intended Use
| | |
| -------------------------- | ------------------|
| **Primary Use Cases** | `InfiGFusion` is designed to accelerate research on language model fusion and serve as a foundation for generative AI-powered features. It is suitable for building general-purpose AI systems and applications (primarily in English), especially in scenarios that require:<br><br>1. Operation in memory- or compute-constrained environments.<br>2. Low-latency inference.<br>3. Advanced reasoning and logical inference.|
| **Out-of-Scope Use Cases** | `InfiGFusion` is not specifically optimized or evaluated for all downstream tasks. As such:<br><br>1. Developers should consider the general limitations of language models and carefully evaluate performance, safety, and fairness before deploying in sensitive or high-stakes applications.<br>2. Use of the model must comply with all applicable laws and regulations (e.g., data privacy, export controls), particularly given its English-language focus.<br>3. This Model Card does not alter or restrict the terms of the model’s open-source license. |
## 💼 Data Overview
### 📚 Training Data
We construct a novel multi-task training dataset comprising **130k curated examples** across three major domains: **general reasoning**, **mathematics**, and **code generation**.
1. **General Reasoning (52K samples)**
Samples are sourced from the [Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct) dataset, a high-quality instruction-following corpus created through expert filtering.
2. **Mathematics (39K samples)**
Questions are drawn from the [NuminaMath-1.5](https://huggingface.co/datasets/AI-MO/NuminaMath-1.5) dataset—an advanced benchmark for competition-level math spanning Algebra, Geometry, Combinatorics, Calculus, Inequalities, Logic & Puzzles, and Number Theory.
Answers are distilled from the [DeepSeek-R1-671B](https://huggingface.co/datasets/a-m-team/AM-DeepSeek-R1-Distilled-1.4M) model by the AM team.
3. **Code Generation (39K samples)**
We used [KodCode-V1-SFT-R1](https://huggingface.co/mlfoundations-dev/KodCode-V1-SFT-R1_300k_batch_size_512), a dataset with 268K code samples. Each example was processed by our pivot model to generate five completions. These were sandbox-evaluated, and samples where at least one generation failed were flagged. From these, we filtered and distilled 39K high-quality examples.
| **Type** | **General** | **Math** | **Code** |
| ----------------- | ----------------- | -------------- | ----------------- |
| **Dataset** | Infinity-Instruct | NuminaMath-1.5 | KodCode-V1-SFT-R1 |
| **Original Size** | 1.4M | 1.4M | 268K |
| **Filtered Size** | 52K | 39K | 39K |
#### Benchmark evaluation
To enhance the robustness of answer extraction under the regex-based evaluation framework of [OpenCompass](https://github.com/open-compass/opencompass) and [EvalPlus](https://github.com/evalplus/evalplus), we systematically refine the prompts used in several benchmark datasets. These tailored prompt formats are designed to facilitate precise output matching, mitigating ambiguities that often arise from model generations. The revised prompt templates corresponding to each dataset are presented in the following table, which details how task instructions and answer formats are standardized to align with OpenCompass's automatic evaluation pipeline.
For datasets such as TheoremQA and HumanEval, we retain the original prompt configurations, adhering to their respective community-adopted evaluation protocols. This ensures consistency with prior works and preserves the validity of established benchmarks.
For MBPP, we utilize EvalPlus for a more rigorous assessment of LLM-generated code, providing enhanced reliability in functional correctness evaluation. More specifically:
| **Benchmark** | **Prompt Format** |
| ------------- | ----------------------------------|
| **IFEval** | `{prompt}\nPlease directly give the correct answer:` |
| **ARC-C** | `Question: {question}\nA. {textA}\nB. {textB}\nC. {textC}\nD. {textD}\nDirectly give me the correct answer option, and then explain:` |
| **Hellaswag** | `{ctx}\nQuestion: Which ending makes the most sense?\nDirectly give me the correct choice, you can further explain it or not.\nA. {A}\nB. {B}\nC. {C}\nD. {D}\nYou may choose from 'A', 'B', 'C', 'D'.\nAnswer:` |
| **BBH** | `Follow the given examples and answer the question.\n{_hint}\nQ: {input}\nA: Let's think step by step.` |
| **DROP** | `You will be asked to read a passage and answer a question. Some examples of passages and Q&A are provided below.\n{drop_examples}\n\n# Your Task\n---\n{prompt}\nThink step by step, then write a line of the form "Answer: $ANSWER" at the end of your response.` |
| **MMLU** | `{_hint}\nQuestion: {input}\nA. {A}\nB. {B}\nC. {C}\nD. {D}\n\nFor simple problems:\nDirectly provide the answer with minimal explanation.\n\nFor complex problems:\nUse this step-by-step format:\n## Step 1: [Concise description]\n[Brief explanation]\n## Step 2: [Concise description]\n[Brief explanation]\n\nRegardless of the approach, always conclude with:\nThe answer is [the_answer_letter].\nwhere the [the_answer_letter] is one of A, B, C or D.\n\nLet's think step by step.` |
| **GSM8K** | `{question}\nPlease reason step by step, and put your final answer within \boxed{}.` |
| **MATH** | `{problem}\nPlease reason step by step, and put your final answer within \boxed{}.` |
## 🚀 Usage
### Input Formats
Our fusion process uses [phi-4](https://huggingface.co/microsoft/phi-4) as the **pivot model**, and thus **InfiGFusion shares the same prompt format and usage style** as `phi-4`.
Given the nature of the training data, `InfiGFusion` performs best when used with prompts in the following chat-style format:
```bash
<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
### With `transformers`
```python
import transformers
pipeline = transformers.pipeline(
"text-generation",
model="InfiGFusion", # replace with actual model path
model_kwargs={"torch_dtype": "auto"},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
{"role": "user", "content": "How should I explain the Internet?"},
]
outputs = pipeline(messages, max_new_tokens=1024)
print(outputs[0]["generated_text"][-1])
```
> 📌 *Note: Since InfiGFusion uses `phi-4` as its pivot model during fusion, it inherits many of its usage patterns and prompt compatibility features.*
## 🎯 Model Quality

## ✅ Responsible AI Considerations
Like other large language models, `InfiGFusion` may exhibit behaviors that raise concerns around fairness, safety, and reliability. While our fusion framework enhances reasoning and relational inference, the foundation models it integrates—including the pivot model `phi-4`—carry inherited limitations. Users should be aware of the following considerations:
* **Language Coverage & Bias:**
InfiGFusion is primarily trained and evaluated on English datasets. Its performance on non-English inputs may be degraded. Moreover, any biases or stereotypes present in the underlying models or datasets may be preserved or amplified through the fusion process.
* **Representation of Harms & Perpetuation of Stereotypes:**
The fused models may over- or under-represent certain groups or reinforce societal stereotypes. Although quality filters and alignment procedures are used, they cannot fully eliminate harmful representations due to real-world imbalances in the data sources.
* **Content Safety:**
The model may generate inappropriate, offensive, or unsafe content, especially in unconstrained or adversarial inputs. It is not recommended for use in sensitive domains (e.g., mental health, legal advice) without additional safeguards.
* **Fused Behavior Complexity:**
Due to the fusion of multiple distinct models, `InfiGFusion` may exhibit complex or emergent behaviors not present in any single model. This makes interpretability and debugging more challenging, especially in high-risk applications.
* **Factuality & Hallucination:**
Like other generative models, `InfiGFusion` may produce convincing yet factually incorrect or outdated content. Developers should not rely on model output as a source of truth and are encouraged to integrate verification mechanisms such as Retrieval-Augmented Generation (RAG).
* **Code Generation Limitations:**
While `InfiGFusion` includes training on code-related datasets, it may still produce invalid, unsafe, or incomplete code snippets. Outputs involving unfamiliar libraries or languages should be carefully reviewed before use.
---
### Developer Responsibilities
Developers using `InfiGFusion` are encouraged to:
* **Evaluate outputs contextually**, particularly for fairness, accuracy, and safety.
* **Follow all applicable laws and regulations**, including those relating to privacy, trade compliance, and data use.
* **Avoid deployment in high-stakes decision-making** (e.g., employment, finance, law enforcement) without extensive validation and domain-specific safeguards.
* **Clearly disclose to users** that they are interacting with an AI system, following transparency and responsible AI best practices.
By using this model, you agree to evaluate and manage risks responsibly and ensure your applications align with ethical and regulatory expectations.
## 🗒️ BibTex Citation
If you find this work helpful, feel free to give us a cite.
```bibtex
@article{wang2025infigfusion,
title={InfiGFusion: Graph-on-Logits Distillation via Efficient Gromov-Wasserstein for Model Fusion},
author={Wang, Yuanyi and Yan, Zhaoyi and Zhang, Yiming and Zhou, Qi and Gu, Yanggan and Wu, Fei and Yang, Hongxia},
journal={arXiv preprint arXiv:2505.13893},
year={2025}
}
```
**Anzhc/MS-LC-EQ-D-VR_VAE** | author: Anzhc | last_modified: 2025-08-07T08:26:03Z | downloads: 2,745 | likes: 30 | library_name: diffusers | pipeline_tag: null | createdAt: 2025-07-15T22:12:46Z | tags: diffusers, arxiv:2502.09509, arxiv:2506.07863, base_model:stabilityai/sdxl-vae, base_model:finetune:stabilityai/sdxl-vae, region:us

---
base_model:
- stabilityai/sdxl-vae
library_name: diffusers
---
# MS-LC-EQ-D-VR VAE: another reproduction of EQ-VAE on variable VAEs and then some
### Current VAEs present:
- SDXL VAE
- FLUX VAE
EQ-VAE paper: https://arxiv.org/abs/2502.09509 <br>
VIVAT paper: https://arxiv.org/pdf/2506.07863v1 <br>
Thanks to Kohaku and his reproduction that made me look into this: https://huggingface.co/KBlueLeaf/EQ-SDXL-VAE <br>

Latent to PCA <br>
**IMPORTANT**: This VAE requires reflection padding on the conv layers. It should be added both in your trainer and in your webui.
You can do it with this snippet on the VAE model:
```python
import torch.nn as nn  # the snippet assumes self.model is the loaded VAE (a torch.nn.Module)

for module in self.model.modules():
if isinstance(module, nn.Conv2d):
pad_h, pad_w = module.padding if isinstance(module.padding, tuple) else (module.padding, module.padding)
if pad_h > 0 or pad_w > 0:
module.padding_mode = "reflect"
```
If you have trained without this - don't worry, just add this modification and do a small tune to fix up artefacts on edges.
(left - padded, right - not)

## Introduction
Refer to https://huggingface.co/KBlueLeaf/EQ-SDXL-VAE for introduction to EQ-VAE.
This implementation additionally utilizes some of fixes proposed in VIVAT paper, and custom in-house regularization techniques, as well as training implementation.
For additional examples and more information refer to: https://arcenciel.io/articles/20 and https://arcenciel.io/models/10994
## Visual Examples

## Usage
This is a finetuned SDXL VAE, adapted with new regularization and other techniques. You can use it with your existing SDXL model, but images will show noticeable artifacts, particularly oversharpening and ringing.
This VAE is meant to be used for finetuning; after the finetune, images will look normal again. Be aware that compatibility with old, non-EQ VAEs will be lost (they will become blurry).
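A minimal loading sketch with diffusers follows. Assumptions: the repository layout is compatible with `AutoencoderKL.from_pretrained`; if the VAE is instead shipped as a single `.safetensors` file, use `AutoencoderKL.from_single_file` on the downloaded file. The SDXL checkpoint name below is just an example.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Placeholder identifiers: point these at your SDXL checkpoint and the downloaded VAE
vae = AutoencoderKL.from_pretrained("Anzhc/MS-LC-EQ-D-VR_VAE", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example base model; use your own finetune
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
```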
## Training Setup
#### Base SDXL:
* Base Model: [SDXL-VAE](https://huggingface.co/stabilityai/sdxl-vae)
* Resolution: 256
* Dataset: ~12.8k anime images
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~75k
* Loss Weights:
* L1: 0.3
* L2: 0.5
* SSIM: 0.5
* LPIPS: 0.5
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~8-10 hours on **4060Ti**
#### B2:
* Base Model: First version
* Resolution: 256
* Dataset: 87.8k anime images
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~150k
* Loss Weights:
* L1: 0.2
* L2: 0.4
* SSIM: 0.6
* LPIPS: 0.8
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~16 hours on **4060Ti**
#### B3:
* Base Model: B2
* Resolution: 256
* Dataset: 162.8k anime images
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~225k
* Loss Weights:
* L1: 0.2
* L2: 0.4
* SSIM: 0.6
* LPIPS: 0.8
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~24 hours on **4060Ti**
#### B4:
* Base Model: B3
* Resolution: 320
* Dataset: ~237k anime images
* Batch Size: 72 (bs 6, grad acc 12)
* Samples Seen: ~300k
* Loss Weights:
* L1: 0.5
* L2: 0.9
* SSIM: 0.6
* LPIPS: 0.7
* KL: 0.000001
* Consistency Loss: 0.75
* wavelet: 0.3
Both Encoder and Decoder were trained.
**Total Training Time**: ~33 hours on **4060Ti**
B2 is a direct continuation of the base version; the stats displayed are cumulative across multiple runs.
I took a batch of 75k images, so samples seen never repeated.
B3 repeats B2 on another batch of data and further solidifies cleaner latents. Minor tweaks were made to the training code for better regularization.
B4 changes the mixture a bit to concentrate more on reconstruction quality. Additionally, resolution was increased to 320. Wavelet loss was added at a low weight (but its effect is yet to be studied).
---
#### Base FLUX:
* Base Model: [FLUX-VAE](https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/vae)
* Dataset: ~12.8k anime images
* Batch Size: 128 (bs 8, grad acc 16)
* Samples Seen: ~62.5k
* Loss Weights:
* L1: 0.3
* L2: 0.4
* SSIM: 0.6
* LPIPS: 0.6
* KL: 0.000001
* Consistency Loss: 0.75
Both Encoder and Decoder were trained.
**Training Time**: ~6 hours on **4060Ti**
## Evaluation Results
I'm using a small test set I have on hand, separated into anime (434) and photo (500) images. Additionally, I'm measuring noise in latents. Sorry for the lack of larger test sets.
### Results on small benchmark of 500 photos
| VAE SDXL | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS-SSIM ↑ | KL ↓ | RFID ↓ |
|---------------------------------------|----------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|--------------------------------------|
| sdxl_vae | 6.282 | 10.534 | 29.278 | <span style="color:Crimson">0.063</span> | 0.947 | <span style="color:Crimson">31.216</span> | <span style="color:Crimson">4.819</span> |
| Kohaku EQ-VAE | 6.423 | 10.428 | 29.140 | <span style="color:Orange">*0.082*</span> | 0.945 | 43.236 | 6.202 |
| Anzhc MS-LC-EQ-D-VR VAE | <span style="color:Orange">*5.975*</span> | <span style="color:Orange">*10.096*</span> | <span style="color:Orange">*29.526*</span> | 0.106 | <span style="color:Orange">*0.952*</span> | <span style="color:Orange">*33.176*</span> | 5.578 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 6.082 | 10.214 | 29.432 | 0.103 | 0.951 | 33.535 | 5.509 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | 6.066 | 10.151 | 29.475 | 0.104 | 0.951 | 34.341 | 5.538 |
| Anzhc MS-LC-EQ-D-VR VAE B4 | <span style="color:Crimson">5.839</span> | <span style="color:Crimson">9.818</span> | <span style="color:Crimson">29.788</span> | 0.112 | <span style="color:Crimson">0.954</span> | 35.762 | <span style="color:Orange">*5.260*</span> |
| VAE FLUX | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS‑SSIM ↑ | KL ↓ | rFID ↓ |
|---|---|---|---|---|---|---|---|
| FLUX VAE | <span style="color:Orange">*4.147*</span> | <span style="color:Orange">*6.294*</span> | <span style="color:Orange">*33.389*</span> | <span style="color:Crimson">0.021</span> | <span style="color:Crimson">0.987</span> | <span style="color:Orange">*12.146*</span> | <span style="color:Crimson">0.565</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">3.799</span> | <span style="color:Crimson">6.077</span> | <span style="color:Crimson">33.807</span> | <span style="color:Orange">*0.032*</span> | <span style="color:Orange">*0.986*</span> | <span style="color:Crimson">10.992</span> | <span style="color:Orange">*1.692*</span> |
#### Noise in latents
| VAE SDXL | Noise ↓ |
|-----------------------------------------|------------------------------------|
| sdxl_vae | 27.508 |
| Kohaku EQ-VAE | 17.395 |
| Anzhc MS-LC-EQ-D-VR VAE | 15.527 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 13.914 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | <span style="color:Orange">*13.124*</span> |
| Anzhc MS-LC-EQ-D-VR VAE B4 | <span style="color:Crimson">12.354</span> |
| VAE FLUX | Noise ↓ |
|---|---|
| FLUX VAE | <span style="color:Orange">*10.499*</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">7.635</span> |
---
### Results on a small benchmark of 434 anime arts
| VAE SDXL | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS-SSIM ↑ | KL ↓ | RFID ↓ |
|-----------------------------------------|----------------------------------|---------------------------------------|---------------------------------------|-------------------------------------|--------------------------------------|--------------------------------------|------------------------------------|
| sdxl_vae | 4.369 | 7.905 | <span style="color:Orange">*31.080*</span> | <span style="color:Crimson">0.038</span> | 0.969 | <span style="color:Crimson">35.057</span> | <span style="color:Crimson">5.088</span> |
| Kohaku EQ-VAE | 4.818 | 8.332 | 30.462 | <span style="color:Orange">*0.048*</span> | 0.967 | 50.022 | 7.264 |
| Anzhc MS-LC-EQ-D-VR VAE | 4.351 | <span style="color:Orange">*7.902*</span> | 30.956 | 0.062 | 0.970 | <span style="color:Orange">*36.724*</span> | 6.239 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | <span style="color:Orange">*4.313*</span> | 7.935 | 30.951 | 0.059 | <span style="color:Orange">*0.970*</span> | 36.963 | 6.147 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | 4.323 | 7.910 | 30.977 | 0.058 | 0.970 | 37.809 | 6.075 |
| Anzhc MS-LC-EQ-D-VR VAE B4 | <span style="color:Crimson">4.140</span> | <span style="color:Crimson">7.617</span> | <span style="color:Crimson">31.343</span> | 0.058 | <span style="color:Crimson">0.971</span> | 39.057 | <span style="color:Orange">*5.670*</span> |
| VAE FLUX | L1 ↓ | L2 ↓ | PSNR ↑ | LPIPS ↓ | MS‑SSIM ↑ | KL ↓ | rFID ↓ |
|---|---|---|---|---|---|---|---|
| FLUX VAE | <span style="color:Orange">*3.060*</span> | <span style="color:Crimson">4.775</span> | <span style="color:Crimson">35.440</span> | <span style="color:Crimson">0.011</span> | <span style="color:Crimson">0.991</span> | <span style="color:Orange">*12.472*</span> | <span style="color:Crimson">0.670</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">2.933</span> | <span style="color:Orange">*4.856*</span> | <span style="color:Orange">*35.251*</span> | <span style="color:Orange">*0.018*</span> | <span style="color:Orange">*0.990*</span> | <span style="color:Crimson">11.225</span> | <span style="color:Orange">*1.561*</span> |
#### Noise in latents
| VAE SDXL | Noise ↓ |
|-----------------------------------------|------------------------------------|
| sdxl_vae | 26.359 |
| Kohaku EQ-VAE | 17.314 |
| Anzhc MS-LC-EQ-D-VR VAE | 14.976 |
| Anzhc MS-LC-EQ-D-VR VAE B2 | 13.649 |
| Anzhc MS-LC-EQ-D-VR VAE B3 | <span style="color:Orange">*13.247*</span> |
| Anzhc MS-LC-EQ-D-VR VAE B4 | <span style="color:Crimson">12.652</span> |
| VAE FLUX | Noise ↓ |
|---|---|
| FLUX VAE | <span style="color:Orange">*9.913*</span> |
| MS‑LC‑EQ‑D‑VR VAE FLUX | <span style="color:Crimson">7.723</span> |
The KL loss suggests that this VAE implementation stays much closer to SDXL, and will likely be a better candidate for further finetuning, but that is just a theory.
B2 further improves latent clarity while maintaining the same or better performance. It particularly improves handling of very fine textures, which previously would be overcorrected into smooth surfaces; it performs better in such cases now.
B3 cleans them up even more, but at that point they are visually more or less the same.
## References
[1] [[2502.09509] EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling](https://arxiv.org/abs/2502.09509)
[2] [[2506.07863] VIVAT: VIRTUOUS IMPROVING VAE TRAINING THROUGH ARTIFACT MITIGATION](https://arxiv.org/abs/2506.07863v1)
[3] [sdxl-vae](https://huggingface.co/stabilityai/sdxl-vae)
## Cite
```bibtex
@misc{anzhc_ms-lc-eq-d-vr_vae,
author = {Anzhc},
title = {MS-LC-EQ-D-VR VAE: another reproduction of EQ-VAE on variable VAEs and then some},
year = {2025},
howpublished = {Hugging Face model card},
url = {https://huggingface.co/Anzhc/MS-LC-EQ-D-VR_VAE},
note = {Finetuned SDXL-VAE with EQ regularization and more, for improved latent representation.}
}
```
## Acknowledgement
My friend Bluvoll, for no particular reason.
**TAUR-dev/M-test_all_parts-sft** | author: TAUR-dev | last_modified: 2025-08-07T08:14:31Z | downloads: 35 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2025-08-05T11:20:51Z | tags: safetensors, qwen2, region:us

# M-test_all_parts-sft
This model was created as part of the **test_all_parts** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: test_all_parts
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_cd3arg_Qwen2_5_1_5B_Instruct_AnsRev_think", "template": "qwen", "cutoff_len": 16384, "max_samples": 50, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datastor1/mwadhwa/tmp/sf/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 5, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__test_all_parts__v1", "sf_eval_before_training": false, "sf_wandb_project": "test_all_parts_sft", "sf_eval_steps": null, "run_name": "test_all_parts_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__test_all_parts__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-test_all_parts-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-test_all_parts-sft")
```
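A short generation sketch following the loading code above (the prompt is arbitrary; since the base model is Qwen2.5-1.5B-Instruct, use of the chat template is assumed here):
```python
messages = [{"role": "user", "content": "What is 17 * 23?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```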
**medoxz543/hate-speech** | author: medoxz543 | last_modified: 2025-08-07T07:08:01Z | downloads: 41 | likes: 0 | library_name: null | pipeline_tag: text-classification | createdAt: 2025-04-24T06:51:39Z | tags: pytorch, roberta, text-classification, en, dataset:Machlovi/Hatebase, dataset:manueltonneau/english-hate-speech-superset, base_model:vinai/bertweet-base, base_model:finetune:vinai/bertweet-base, license:apache-2.0, region:us

---
license: apache-2.0
datasets:
- Machlovi/Hatebase
- manueltonneau/english-hate-speech-superset
language:
- en
metrics:
- accuracy
base_model:
- vinai/bertweet-base
pipeline_tag: text-classification
---
**overfit-brothers/KRAG-SOTA** | author: overfit-brothers | last_modified: 2025-08-07T05:56:32Z | downloads: 2 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2025-08-04T01:48:46Z | tags: safetensors, llama, region:us

# Example input text
Below is an example of the model input text produced by the RAG pipeline built by our team.
```python
"""다음 글은 어문 규범에 맞는 표현을 고르거나 어문 규범에 따라 문장을 교정하고 그 이유를 설명할 수 있는 자료이다
2. 자음 뒤에 [w]가 올 때에는 두 음절로 갈라 적되, [gw], [hw], [kw]는 한 음절로 붙여
적는다.
- swing[swiŋ] 스윙, twist[twist] 트위스트, penguin[peŋgwin] 펭귄, whistle[hwisl] 휘슬,quarter[kwɔːtə] 쿼터
다음은 질문 유형과 답변 형식을 이해하는 데 도움이 되는 여러 예시입니다. 이 예시들의 내용에 얽매이지 않고, 주어진 질문에 대해 가장 적절한 답변을 생성하세요.
question:""네덜란드의 {헤이그/하흐}에 가 봤다."" 가운데 올바른 것을 선택하고, 그 이유를 설명하세요.
answer:""네덜란드의 헤이그에 가 봤다.""가 옳다. 원지음이 아닌 제3국의 발음으로 통용되고 있는 것은 관용을 따른다. 따라서 '헤이그'로 쓰는 것이 적절하다.
**위 'answer:' 뒤의 출력 형식을 엄격히 준수하여 답변을 생성하십시오.**
이제 질문을 시작합니다.
question:""{히치하이크/힛치하이크}를 하다."" 가운데 올바른 것을 선택하고, 그 이유를 설명하세요."""
```
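A minimal sketch of feeding a prompt constructed like the one above to the model with transformers (assumed usage; the generation settings are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("overfit-brothers/KRAG-SOTA")
model = AutoModelForCausalLM.from_pretrained("overfit-brothers/KRAG-SOTA", device_map="auto")

prompt_text = "..."  # paste a RAG-constructed prompt like the example above
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```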
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# 0725_5-merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* /home/infidea/rebirth-hjun/KRAG_2025/modle_merge/kanana-1.5-8b-instruct-2505-lora-20250715-1532
* /home/infidea/rebirth-hjun/KRAG_2025/modle_merge/0725_4-merge
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: /home/infidea/rebirth-hjun/KRAG_2025/modle_merge/0725_4-merge
dtype: bfloat16
merge_method: slerp
modules:
default:
slices:
- sources:
- layer_range: [0, 32]
model: /home/infidea/rebirth-hjun/KRAG_2025/modle_merge/0725_4-merge
- layer_range: [0, 32]
model: /home/infidea/rebirth-hjun/KRAG_2025/modle_merge/kanana-1.5-8b-instruct-2505-lora-20250715-1532
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.7, 0.9, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.5, 0.0]
- value: 0.8
```
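For reference, a configuration like this is typically applied with mergekit's CLI (a sketch; the config filename and output path are placeholders):
```bash
pip install mergekit
# Write the YAML above to merge_config.yaml, then run the merge
mergekit-yaml merge_config.yaml ./0725_5-merge --cuda
```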
**Fugaki/RecurrentGemma_IndonesiaSummarizerNews** | author: Fugaki | last_modified: 2025-08-07T05:52:27Z | downloads: 0 | likes: 0 | library_name: null | pipeline_tag: null | createdAt: 2025-08-07T05:33:18Z | tags: license:apache-2.0, region:us

---
license: apache-2.0
---
**meandyou200175/intent_1tg_fix** | author: meandyou200175 | last_modified: 2025-08-07T04:25:22Z | downloads: 0 | likes: 0 | library_name: transformers | pipeline_tag: null | createdAt: 2025-08-07T03:49:44Z | tags: transformers, arxiv:1910.09700, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Thireus/GLM-4.5-THIREUS-IQ1_KT-SPECIAL_SPLIT
|
Thireus
| 2025-08-07T04:16:20Z | 7 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-03T09:02:17Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
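As a rough illustration only (the recipe file, input shards, output path, and fallback quant type below are placeholders, not files from this repository), a self-quantization run along those lines might look like:
```bash
# Strip comments from a downloaded recipe and feed the remaining lines to --custom-q.
custom=$(grep -v '^#' my-model.recipe | paste -sd, -)

./build/bin/llama-quantize \
  --custom-q "$custom" \
  /path/to/BF16/model-BF16-00001-of-000XX.gguf \
  /path/to/output/model-custom.gguf \
  Q4_K
```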
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection (a verification sketch follows this list).
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
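A minimal verification sketch (the detached-signature file name below is an assumption; use whatever signature files the repository actually ships):
```bash
# Import the Tool Suite signing key, then check the signed metadata file.
gpg --import trusted-keys.asc
gpg --verify tensors.map.sig tensors.map  # assumed detached-signature name
```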
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
x2bee/Polar-oss-20B
|
x2bee
| 2025-08-07T03:19:55Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-07T03:09:42Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format; they will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to setup your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can proceed to run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
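If you call `model.generate` directly rather than using the pipeline, the chat template applies the harmony format for you. A minimal sketch (generation settings are illustrative):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# apply_chat_template renders the conversation in the expected harmony format
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```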
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
## LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
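For example, a minimal sketch that reuses the `pipe` object from the Transformers example above (the exact wording of the system message is what sets the level; other settings are illustrative):
```py
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(messages, max_new_tokens=1024)
print(outputs[0]["generated_text"][-1])
```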
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
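As a minimal sketch of what that can look like with TRL (the dataset, output directory, and batch size below are placeholders; in practice you would typically add PEFT/LoRA and harmony-formatted data for your use case):
```py
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder chat dataset; substitute your own task-specific data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gpt-oss-20b-sft", per_device_train_batch_size=1),
)
trainer.train()
```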
|
Trelis/Qwen3-4B_dsarc-programs-correct-50_20250806-233716
|
Trelis
| 2025-08-07T01:31:14Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T01:29:48Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
echoboi/solar_energy-distilbert-classifier
|
echoboi
| 2025-08-07T00:58:15Z | 14 | 0 | null |
[
"text-classification",
"multilabel-classification",
"housing",
"climate-change",
"sustainability",
"solar-energy",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-05T18:41:27Z |
---
language: en
tags:
- text-classification
- multilabel-classification
- housing
- climate-change
- sustainability
- solar-energy
license: mit
---
# Solar Energy Classifier (Distilbert)
This model classifies content related to solar power on climate change subreddits.
## Model Details
- Model Type: Distilbert
- Task: Multilabel text classification
- Sector: Solar Energy
- Base Model: Distilbert base uncased
- Labels: 7
- Training Data: Sample from 1000 GPT 4o-mini-labeled Reddit posts from climate subreddits (2010-2023)
## Labels
The model predicts 7 labels simultaneously:
1. **Decommissioning And Waste**: Talks about end-of-life panel/turbine disposal, recycling, landfill issues.
2. **Foreign Dependence And Trade**: References Chinese panel dominance, tariffs, trade wars, or reshoring supply chains.
3. **Grid Stability And Storage**: Discussions of intermittency, batteries, pumped hydro, or grid reliability with high renewables.
4. **Land Use**: Raises land-area or space requirements, farmland loss, or siting footprint of solar/wind.
5. **Local Economy**: Claims solar/wind projects create or harm local jobs, investment, or economic growth.
6. **Subsidy And Tariff Debate**: Argues over feed-in-tariffs, net-metering rules or subsidy fairness.
7. **Utility Bills**: Mentions household or community electricity bills going up or down due to solar/wind.
Note: Label order in predictions matches the order above.
## Usage
```python
import torch, sys, os, tempfile
from transformers import DistilBertTokenizer
from huggingface_hub import snapshot_download
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def print_sorted_label_scores(label_scores):
# Sort label_scores dict by score descending
sorted_items = sorted(label_scores.items(), key=lambda x: x[1], reverse=True)
for label, score in sorted_items:
print(f" {label}: {score:.6f}")
# Model link and examples for this specific model
model_link = 'sanchow/solar_energy-distilbert-classifier'
examples = [
"Solar panels on rooftops can significantly reduce electricity bills."
]
print(f"\n{'='*60}")
print("MODEL: SOLAR ENERGY SECTOR")
print(f"{'='*60}")
print(f"Downloading model: {model_link}")
with tempfile.TemporaryDirectory() as temp_dir:
snapshot_download(
repo_id=model_link,
local_dir=temp_dir,
local_dir_use_symlinks=False
)
model_class_path = os.path.join(temp_dir, 'model_class.py')
if not os.path.exists(model_class_path):
print(f"model_class.py not found in downloaded files")
print(f" Available files: {os.listdir(temp_dir)}")
else:
sys.path.insert(0, temp_dir)
from model_class import MultilabelClassifier
tokenizer = DistilBertTokenizer.from_pretrained(temp_dir)
checkpoint = torch.load(os.path.join(temp_dir, 'model.pt'), map_location='cpu', weights_only=False)
model = MultilabelClassifier(checkpoint['model_name'], len(checkpoint['label_names']))
model.load_state_dict(checkpoint['model_state_dict'])
model.to(device)
model.eval()
print("Model loaded successfully")
print(f" Labels: {checkpoint['label_names']}")
print("\nSolar Energy classifier results:\n")
for i, test_text in enumerate(examples):
inputs = tokenizer(
test_text,
return_tensors="pt",
truncation=True,
max_length=512,
padding=True
).to(device)
with torch.no_grad():
outputs = model(**inputs)
            predictions = (outputs[0] if isinstance(outputs, (tuple, list)) else outputs).cpu().numpy()
label_scores = {label: float(score) for label, score in zip(checkpoint['label_names'], predictions[0])}
print(f"Example {i+1}: '{test_text}'")
print("Predictions (all label scores, highest first):")
print_sorted_label_scores(label_scores)
print("-" * 40)
```
## Performance
Best model performance:
- Micro Jaccard: 0.4106
- Macro Jaccard: 0.5228
- F1 Score: 0.8590
- Accuracy: 0.8590
Dataset: ~900 GPT-labeled samples per sector (600 train, 150 validation, 150 test)
## Optimal Thresholds
```python
optimal_thresholds = {'Decommissioning And Waste': 0.37254738295870854, 'Foreign Dependence And Trade': 0.37613221483784043, 'Grid Stability And Storage': 0.43063579501768967, 'Land Use': 0.2008681860202493, 'Local Economy': 0.3853212494245655, 'Subsidy And Tariff Debate': 0.42756546792925043, 'Utility Bills': 0.3370254357621166}
for label, score in zip(label_names, predictions[0]):
threshold = optimal_thresholds.get(label, 0.5)
if score > threshold:
print(f"{label}: {score:.3f}")
```
## Training
Trained on GPT-labeled Reddit data:
1. Data collection from climate subreddits
2. Keyword-based filtering for sector-specific content
3. GPT labeling for multilabel classification
4. 80/10/10 train/validation/test split
5. Fine-tuning with threshold optimization
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{solar_energy_distilbert_classifier,
title={Solar Energy Classifier for Climate Change Analysis},
author={Sandeep Chowdhary},
year={2025},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/echoboi/solar_energy-distilbert-classifier}},
}
```
## Limitations
- Trained on data from specific climate change subreddits and limited to English content
- Performance depends on GPT-generated labels
|
krajarat/TwinLlama-3.1-8B
|
krajarat
| 2025-08-07T00:37:44Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Llama-3.1-8B",
"base_model:finetune:unsloth/Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T00:43:48Z |
---
base_model: unsloth/Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** krajarat
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FluidInference/speaker-diarization-coreml
|
FluidInference
| 2025-08-06T22:20:10Z | 1,671 | 6 | null |
[
"speech",
"audio",
"voice",
"speaker-diarization",
"speaker-change-detection",
"coreml",
"speaker-segmentation",
"base_model:pyannote/speaker-diarization-3.1",
"base_model:finetune:pyannote/speaker-diarization-3.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-24T21:08:37Z |
---
license: apache-2.0
tags:
- speech
- audio
- voice
- speaker-diarization
- speaker-change-detection
- coreml
- speaker-segmentation
base_model:
- pyannote/speaker-diarization-3.1
- pyannote/wespeaker-voxceleb-resnet34-LM
---
# Speaker Diarization CoreML Models
State-of-the-art speaker diarization models optimized for Apple Neural Engine, powering real-time on-device speaker separation with research-competitive performance.
Supports any language; the models are trained on acoustic signatures, not language-specific features.
## Model Description
This repository contains CoreML-optimized speaker diarization models specifically converted and optimized for Apple devices (macOS 13.0+, iOS 16.0+). These models enable efficient on-device speaker diarization with minimal power consumption while maintaining state-of-the-art accuracy.
## Usage
See the SDK for more details [https://github.com/FluidInference/FluidAudio](https://github.com/FluidInference/FluidAudio)
### With FluidAudio SDK (Recommended)
Installation
Add FluidAudio to your project using Swift Package Manager:
```
dependencies: [
.package(url: "https://github.com/FluidInference/FluidAudio.git", from: "0.0.2"),
],
```
```swift
import FluidAudio
Task {
let diarizer = DiarizerManager()
try await diarizer.initialize()
let audioSamples: [Float] = // your 16kHz audio
let result = try await diarizer.performCompleteDiarization(
audioSamples,
sampleRate: 16000
)
for segment in result.segments {
print("Speaker \(segment.speakerId): \(segment.startTimeSeconds)s - \(segment.endTimeSeconds)s")
}
}
```

### Direct CoreML Usage

```swift
import CoreML
// Load the model
let model = try! SpeakerDiarizationModel(configuration: MLModelConfiguration())
// Prepare input (16kHz audio)
let input = SpeakerDiarizationModelInput(audioSamples: audioArray)
// Run inference
let output = try! model.prediction(input: input)
```
## Acknowledgments
These CoreML models are based on excellent work from:
- sherpa-onnx - Foundational diarization algorithms
- pyannote-audio - State-of-the-art diarization research
- wespeaker - Speaker embedding techniques
### Key Features
- **Apple Neural Engine Optimized**: Zero performance trade-offs with maximum efficiency
- **Real-time Processing**: RTF of 0.02x (50x faster than real-time)
- **Research-Competitive**: DER of 17.7% on AMI benchmark
- **Power Efficient**: Designed for maximum performance per watt
- **Privacy-First**: All processing happens on-device
## Intended Uses & Limitations
### Intended Uses
- **Meeting Transcription**: Real-time speaker identification in meetings
- **Voice Assistants**: Multi-speaker conversation understanding
- **Media Production**: Automated speaker labeling for podcasts/interviews
- **Research**: Academic research in speaker diarization
- **Privacy-Focused Applications**: On-device processing without cloud dependencies
### Limitations
- Optimized for 16kHz audio input
- Best performance with clear audio (no heavy background noise)
- May struggle with heavily overlapping speech
- Requires Apple devices with CoreML support
### Technical Specifications
- **Input**: 16kHz mono audio
- **Output**: Speaker segments with timestamps and IDs
- **Framework**: CoreML (converted from PyTorch)
- **Optimization**: Apple Neural Engine (ANE) optimized operations
- **Precision**: FP32 on CPU/GPU, FP16 on ANE
## Training Data
These models are converted from open-source variants trained on diverse speaker diarization datasets. The original models were trained on:
- Multi-speaker conversation datasets
- Various acoustic conditions
- Multiple languages and accents
*Note: Specific training data details depend on the original open-source model variant.*
|
yonigozlan/sam2.1_hiera_large_hf
|
yonigozlan
| 2025-08-06T22:14:59Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sam2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-07-18T21:01:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unsloth/Qwen3-4B-Instruct-2507-bnb-4bit
|
unsloth
| 2025-08-06T21:37:21Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T21:37:09Z |
---
tags:
- unsloth
base_model:
- Qwen/Qwen3-4B-Instruct-2507
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>
# Qwen3-4B-Instruct-2507
<a href="https://chat.qwen.ai" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
We introduce the updated version of the **Qwen3-4B non-thinking mode**, named **Qwen3-4B-Instruct-2507**, featuring the following key enhancements:
- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.

## Model Overview
**Qwen3-4B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: **262,144 natively**.
**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Performance
| | GPT-4.1-nano-2025-04-14 | Qwen3-30B-A3B Non-Thinking | Qwen3-4B Non-Thinking | Qwen3-4B-Instruct-2507 |
|--- | --- | --- | --- | --- |
| **Knowledge** | | | |
| MMLU-Pro | 62.8 | 69.1 | 58.0 | **69.6** |
| MMLU-Redux | 80.2 | 84.1 | 77.3 | **84.2** |
| GPQA | 50.3 | 54.8 | 41.7 | **62.0** |
| SuperGPQA | 32.2 | 42.2 | 32.0 | **42.8** |
| **Reasoning** | | | |
| AIME25 | 22.7 | 21.6 | 19.1 | **47.4** |
| HMMT25 | 9.7 | 12.0 | 12.1 | **31.0** |
| ZebraLogic | 14.8 | 33.2 | 35.2 | **80.2** |
| LiveBench 20241125 | 41.5 | 59.4 | 48.4 | **63.0** |
| **Coding** | | | |
| LiveCodeBench v6 (25.02-25.05) | 31.5 | 29.0 | 26.4 | **35.1** |
| MultiPL-E | 76.3 | 74.6 | 66.6 | **76.8** |
| Aider-Polyglot | 9.8 | **24.4** | 13.8 | 12.9 |
| **Alignment** | | | |
| IFEval | 74.5 | **83.7** | 81.2 | 83.4 |
| Arena-Hard v2* | 15.9 | 24.8 | 9.5 | **43.4** |
| Creative Writing v3 | 72.7 | 68.1 | 53.6 | **83.5** |
| WritingBench | 66.9 | 72.2 | 68.5 | **83.4** |
| **Agent** | | | |
| BFCL-v3 | 53.0 | 58.6 | 57.6 | **61.9** |
| TAU1-Retail | 23.5 | 38.3 | 24.3 | **48.7** |
| TAU1-Airline | 14.0 | 18.0 | 16.0 | **32.0** |
| TAU2-Retail | - | 31.6 | 28.1 | **40.4** |
| TAU2-Airline | - | 18.0 | 12.0 | **24.0** |
| TAU2-Telecom | - | **18.4** | 17.5 | 13.2 |
| **Multilingualism** | | | |
| MultiIF | 60.7 | **70.8** | 61.3 | 69.0 |
| MMLU-ProX | 56.2 | **65.1** | 49.6 | 61.6 |
| INCLUDE | 58.6 | **67.8** | 53.8 | 60.1 |
| PolyMATH | 15.6 | 23.3 | 16.6 | **31.1** |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.
## Quickstart
The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-4B-Instruct-2507"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --context-length 262144
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 262144
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
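For example, a sketch of the vLLM command above with a reduced context window:
```shell
vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 32768
```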
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-4B-Instruct-2507',
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0` (see the sketch after this list).
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
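As a minimal sketch, the suggested sampling parameters map directly onto the `generate` call from the quickstart above:
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
```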
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1754510534
|
elsvastika
| 2025-08-06T20:44:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-06T20:44:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
assoni2002/wav2vec2-jailbreak-classification_new
|
assoni2002
| 2025-08-06T20:41:50Z | 43 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-07-31T20:57:22Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-jailbreak-classification_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-jailbreak-classification_new
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0627
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
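A minimal sketch of these settings expressed as 🤗 `TrainingArguments` (the output directory is a placeholder; the other values are taken from the list above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-jailbreak-classification_new",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```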
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.562 | 1.0 | 31 | 0.4704 | 0.9693 |
| 0.2247 | 2.0 | 62 | 0.1643 | 0.9959 |
| 0.0949 | 3.0 | 93 | 0.0689 | 0.9980 |
| 0.0591 | 4.0 | 124 | 0.0476 | 0.9959 |
| 0.0381 | 5.0 | 155 | 0.0331 | 0.9980 |
| 0.0333 | 6.0 | 186 | 0.0314 | 0.9980 |
### Framework versions
- Transformers 4.53.3
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.2
|
olegshulyakov/gemma-3n-E4B-it-GGUF
|
olegshulyakov
| 2025-08-06T20:37:00Z | 678 | 0 |
transformers
|
[
"transformers",
"gguf",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3n-E4B-it",
"base_model:quantized:google/gemma-3n-E4B-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-03T19:06:17Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3n-E4B-it
tags:
- automatic-speech-recognition
- automatic-speech-translation
- audio-text-to-text
- video-text-to-text
- llama-cpp
- gguf-my-repo
---
# gemma-3n-E4B-it-GGUF
**Model creator:** [google](https://huggingface.co/google)<br/>
**Original model**: [google/gemma-3n-E4B-it](https://huggingface.co/google/gemma-3n-E4B-it)<br/>
**GGUF quantization:** provided by [olegshulyakov](https://huggingface.co/olegshulyakov) using `llama.cpp`<br/>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Use with Ollama
```bash
ollama run "hf.co/olegshulyakov/gemma-3n-E4B-it-GGUF:Q8_0"
```
## Use with LM Studio
```bash
lms load "olegshulyakov/gemma-3n-E4B-it-GGUF"
```
## Use with llama.cpp CLI
```bash
llama-cli --hf-repo "olegshulyakov/gemma-3n-E4B-it-GGUF" --hf-file "gemma-3n-E4B-it-Q8_0.gguf" -p "The meaning to life and the universe is"
```
## Use with llama.cpp Server:
```bash
llama-server --hf-repo "olegshulyakov/gemma-3n-E4B-it-GGUF" --hf-file "gemma-3n-E4B-it-Q8_0.gguf" -c 4096
```
|
AlleSpezza/T5-xxl-FTS-merged-PostTraining
|
AlleSpezza
| 2025-08-06T20:05:32Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/flan-t5-xxl",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google/flan-t5-xxl",
"region:us"
] | null | 2025-08-06T20:04:21Z |
---
base_model: google/flan-t5-xxl
library_name: peft
tags:
- base_model:adapter:google/flan-t5-xxl
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
k1000dai/residalact_libero_cosine_scheduler
|
k1000dai
| 2025-08-06T19:54:42Z | 4 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"residualact",
"dataset:k1000dai/libero-addinfo",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T19:54:21Z |
---
datasets: k1000dai/libero-addinfo
library_name: lerobot
license: apache-2.0
model_name: residualact
pipeline_tag: robotics
tags:
- robotics
- lerobot
- residualact
---
# Model Card for residualact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
seraphimzzzz/578546
|
seraphimzzzz
| 2025-08-06T19:50:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T19:50:03Z |
[View on Civ Archive](https://civitaiarchive.com/models/594069?modelVersionId=663511)
|
seraphimzzzz/347021
|
seraphimzzzz
| 2025-08-06T19:45:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T19:45:21Z |
[View on Civ Archive](https://civitaiarchive.com/models/381329?modelVersionId=425680)
|
seraphimzzzz/238715
|
seraphimzzzz
| 2025-08-06T19:44:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T19:44:28Z |
[View on Civ Archive](https://civitaiarchive.com/models/268365?modelVersionId=302567)
|
anikifoss/GLM-4.5-Air-HQ4_K
|
anikifoss
| 2025-08-06T19:43:31Z | 315 | 0 | null |
[
"gguf",
"conversational",
"text-generation",
"base_model:zai-org/GLM-4.5-Air",
"base_model:quantized:zai-org/GLM-4.5-Air",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T21:07:28Z |
---
quantized_by: anikifoss
pipeline_tag: text-generation
base_model: zai-org/GLM-4.5-Air
license: mit
base_model_relation: quantized
tags:
- conversational
---
# Model Card
High quality quantization of **GLM-4.5-Air** without using imatrix.
# Run
## ik_llama.cpp
```
./build/bin/llama-server \
--alias anikifoss/GLM-4.5-Air-HQ4_K \
--model /mnt/data/Models/anikifoss/GLM-4.5-Air-HQ4_K/GLM-4.5-Air-HQ4_K-00001-of-00002.gguf \
--no-mmap -rtr \
--temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
--ctx-size 116000 \
-ctk f16 -ctv f16 \
-fa \
-b 1024 -ub 1024 \
-fmoe \
--n-gpu-layers 99 \
--override-tensor exps=CPU \
--parallel 1 \
--threads 32 \
--threads-batch 64 \
--host 127.0.0.1 \
--port 8090
```
## llama.cpp
```
./build/bin/llama-server \
--alias anikifoss/GLM-4.5-Air-HQ4_K \
--model /mnt/data/Models/anikifoss/GLM-4.5-Air-HQ4_K/GLM-4.5-Air-HQ4_K-00001-of-00002.gguf \
--no-mmap \
--temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 \
--ctx-size 116000 \
-ctk f16 -ctv f16 \
-fa \
--override-tensor exps=CPU \
-ngl 99 \
--jinja \
--parallel 1 \
--threads 32 \
--threads-batch 64 \
--host 127.0.0.1 \
--port 8090
```
## Quantization Recipe
Quantized with [ik_llama](https://github.com/ikawrakow/ik_llama.cpp), but should work with any GGUF compatible inference framework.
```bash
#!/usr/bin/env bash
custom="
blk\.0\.ffn_down\.weight=q8_0
blk\.0\.ffn_gate\.weight=q8_0
blk\.0\.ffn_up\.weight=q8_0
blk\.46\.nextn\.eh_proj\.weight=bf16
blk\.46\.nextn\.embed_tokens\.weight=q8_0
blk\.46\.nextn\.enorm\.weight=f32
blk\.46\.nextn\.hnorm\.weight=f32
blk\.46\.nextn\.shared_head_head\.weight=q8_0
blk\.46\.nextn\.shared_head_norm\.weight=f32
blk\.[0-9]\.attn_k\.bias=f32
blk\.[0-9]\.attn_k\.weight=q8_0
blk\.[0-9]\.attn_norm\.weight=f32
blk\.[0-9]\.attn_output\.weight=q8_0
blk\.[0-9]\.attn_q\.bias=f32
blk\.[0-9]\.attn_q\.weight=q8_0
blk\.[0-9]\.attn_v\.bias=f32
blk\.[0-9]\.attn_v\.weight=q8_0
blk\.[0-9]\.post_attention_norm\.weight=f32
blk\.[1-3][0-9]\.attn_k\.bias=f32
blk\.[1-3][0-9]\.attn_k\.weight=q8_0
blk\.[1-3][0-9]\.attn_norm\.weight=f32
blk\.[1-3][0-9]\.attn_output\.weight=q8_0
blk\.[1-3][0-9]\.attn_q\.bias=f32
blk\.[1-3][0-9]\.attn_q\.weight=q8_0
blk\.[1-3][0-9]\.attn_v\.bias=f32
blk\.[1-3][0-9]\.attn_v\.weight=q8_0
blk\.[1-3][0-9]\.post_attention_norm\.weight=f32
blk\.4[0-6]\.attn_k\.bias=f32
blk\.4[0-6]\.attn_k\.weight=q8_0
blk\.4[0-6]\.attn_norm\.weight=f32
blk\.4[0-6]\.attn_output\.weight=q8_0
blk\.4[0-6]\.attn_q\.bias=f32
blk\.4[0-6]\.attn_q\.weight=q8_0
blk\.4[0-6]\.attn_v\.bias=f32
blk\.4[0-6]\.attn_v\.weight=q8_0
blk\.4[0-6]\.post_attention_norm\.weight=f32
blk\.[1-9]\.exp_probs_b\.bias=f32
blk\.[1-9]\.ffn_down_exps\.weight=q8_0
blk\.[1-9]\.ffn_down_shexp\.weight=bf16
blk\.[1-9]\.ffn_gate_exps\.weight=q4_K
blk\.[1-9]\.ffn_gate_inp\.weight=f32
blk\.[1-9]\.ffn_gate_shexp\.weight=bf16
blk\.[1-9]\.ffn_up_exps\.weight=q4_K
blk\.[1-9]\.ffn_up_shexp\.weight=bf16
blk\.[1-3][0-9]\.exp_probs_b\.bias=f32
blk\.[1-3][0-9]\.ffn_down_exps\.weight=q8_0
blk\.[1-3][0-9]\.ffn_down_shexp\.weight=bf16
blk\.[1-3][0-9]\.ffn_gate_exps\.weight=q4_K
blk\.[1-3][0-9]\.ffn_gate_inp\.weight=f32
blk\.[1-3][0-9]\.ffn_gate_shexp\.weight=bf16
blk\.[1-3][0-9]\.ffn_up_exps\.weight=q4_K
blk\.[1-3][0-9]\.ffn_up_shexp\.weight=bf16
blk\.4[0-6]\.exp_probs_b\.bias=f32
blk\.4[0-6]\.ffn_down_exps\.weight=q8_0
blk\.4[0-6]\.ffn_down_shexp\.weight=bf16
blk\.4[0-6]\.ffn_gate_exps\.weight=q4_K
blk\.4[0-6]\.ffn_gate_inp\.weight=f32
blk\.4[0-6]\.ffn_gate_shexp\.weight=bf16
blk\.4[0-6]\.ffn_up_exps\.weight=q4_K
blk\.4[0-6]\.ffn_up_shexp\.weight=bf16
output\.weight=bf16
output_norm\.weight=f32
token_embd\.weight=bf16
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
echo "Running with: -custom-q $custom"
TARGET_MODEL="GLM-4.5-Air-HQ4_K"
mkdir -p ~/Env/models/anikifoss/$TARGET_MODEL
./build/bin/llama-quantize \
--custom-q "$custom" \
/mnt/data/Models/zai-org/GLM-4.5-Air-GGUF/GLM-4.5-Air-128x9.4B-BF16-00001-of-00005.gguf \
~/Env/models/anikifoss/$TARGET_MODEL/$TARGET_MODEL.gguf \
Q4_K \
32
```
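If you want to sanity-check the finished quant, one option is a quick perplexity run with ik_llama.cpp's `llama-perplexity` tool. This is only a sketch: the `wiki.test.raw` path and thread count are placeholders you will need to adjust.
```bash
# Perplexity sanity check against a local wiki.test.raw file
./build/bin/llama-perplexity \
  -m ~/Env/models/anikifoss/GLM-4.5-Air-HQ4_K/GLM-4.5-Air-HQ4_K.gguf \
  -f /path/to/wiki.test.raw \
  --threads 32
```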
|
jmnlcruz-c3ai/code-search-net-tokenizer
|
jmnlcruz-c3ai
| 2025-08-06T19:41:32Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T19:41:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seraphimzzzz/367549
|
seraphimzzzz
| 2025-08-06T19:40:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T19:40:22Z |
[View on Civ Archive](https://civitaiarchive.com/models/401431?modelVersionId=447607)
|
seraphimzzzz/169515
|
seraphimzzzz
| 2025-08-06T19:37:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T19:36:58Z |
[View on Civ Archive](https://civitaiarchive.com/models/195697?modelVersionId=219993)
|
schonsense/Cream_top2_gguf
|
schonsense
| 2025-08-06T18:38:47Z | 250 | 1 | null |
[
"gguf",
"base_model:schonsense/ll3_3_70B_r128_VAR2",
"base_model:quantized:schonsense/ll3_3_70B_r128_VAR2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T17:58:51Z |
---
base_model:
- schonsense/ll3_3_70B_r128_VAR2
---
Second iteration (the first was the hottest trash) of mass-injecting the good stuff into my spatial-awareness/object-orientation framework. VAR2 was trained on mixed data with no RP, while VAR(1) was trained exclusively on spatial/task data.
```
merge_method: breadcrumbs_ties
models:
- model: Delta-Vector/Austral-70B-Winton
parameters:
gamma: 0.01
density: .2
weight: 0.13
- model: Delta-Vector/Shimamura-70B
parameters:
gamma: 0.01
density: .2
weight: 0.13
- model: Darkhn/L3.3-70B-Animus-V7.0
parameters:
gamma: 0.01
density: .5
weight: 0.13
- model: TheDrummer/Anubis-70B-v1.1
parameters:
gamma: 0.02
density: .3
weight: 0.13
- model: schonsense/Llama3_3_70B_VAR_r128
parameters:
gamma: 0
density: .7
weight: 0.13
- model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
parameters:
gamma: 0.01
density: .3
weight: 0.13
- model: Tarek07/Scripturient-V1.3-LLaMa-70B
parameters:
gamma: 0.01
density: .3
weight: 0.13
- model: zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
parameters:
gamma: 0.02
density: .2
weight: 0.13
- model: schonsense/ll3_3_70B_r128_VAR2
base_model: schonsense/ll3_3_70B_r128_VAR2
tokenizer_source: union
parameters:
normalize: true
int8_mask: true
lambda: 0.95
dtype: float32
out_dtype: bfloat16
```
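For reference, a config like the one above is normally executed with mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming the recipe is saved as `cream_top2.yml` and that you have enough disk for the float32 intermediate weights:
```bash
# --cuda is optional; it speeds up the merge if VRAM allows
mergekit-yaml cream_top2.yml ./Cream_top2 --cuda
```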
|
ShukraJaliya/BLUECOMPUTER.2
|
ShukraJaliya
| 2025-08-06T18:31:32Z | 12 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-06T18:21:39Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- chat
- mlx
library_name: mlx
---
|
danuphat/typhoon-ocr-3b-trl-sft-that-sub-v1-5
|
danuphat
| 2025-08-06T18:21:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:scb10x/typhoon-ocr-7b",
"base_model:finetune:scb10x/typhoon-ocr-7b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:00:08Z |
---
base_model: scb10x/typhoon-ocr-7b
library_name: transformers
model_name: typhoon-ocr-3b-trl-sft-that-sub-v1-5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for typhoon-ocr-3b-trl-sft-that-sub-v1-5
This model is a fine-tuned version of [scb10x/typhoon-ocr-7b](https://huggingface.co/scb10x/typhoon-ocr-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danuphat/typhoon-ocr-3b-trl-sft-that-sub-v1-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/danuphat-l-kasetsart-university/typhoon-ocr-3b-trl-sft-ocr/runs/vm4z4k6d)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.56.0.dev0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1754501017
|
elsvastika
| 2025-08-06T18:01:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-06T18:01:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sidemix/lexy-brazil
|
sidemix
| 2025-08-06T18:00:34Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-06T18:00:25Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/lexy_003000_00_20250806175800.png
text: Lexy taking a mirror selfie in her bedroom wearing a bright yellow halter
bikini, holding her black iPhone case in one hand and smiling confidently
- output:
url: sample/lexy_003000_01_20250806175806.png
text: Lexy standing in her bedroom in a red strappy bikini, striking a flirty
pose for the camera with one hand on her hip and tousled hair
- output:
url: sample/lexy_003000_02_20250806175811.png
text: Lexy snapping a selfie with her black-cased iPhone in a baby blue bandeau
bikini, her other hand resting lightly on her waist
- output:
url: sample/lexy_003000_03_20250806175817.png
text: Lexy posing in front of her bedroom mirror in a hot pink string bikini,
slightly arching her back with a sultry gaze
- output:
url: sample/lexy_003000_04_20250806175822.png
text: Lexy in a green ribbed bikini with gold accents, standing barefoot on her
bedroom carpet and softly smiling at the camera
- output:
url: sample/lexy_003000_05_20250806175828.png
text: Lexy wearing a white lace-trimmed bikini in her bedroom, holding her phone
in front of her face as she captures a mirror selfie with soft window light
- output:
url: sample/lexy_003000_06_20250806175834.png
text: Lexy in a metallic silver bikini standing confidently in her bedroom, both
hands on her hips and looking directly into the lens
- output:
url: sample/lexy_003000_07_20250806175839.png
text: Lexy taking a selfie in a light lavender bikini, holding her black iPhone
up with one hand and gently tugging on her bikini strap with the other
- output:
url: sample/lexy_003000_08_20250806175845.png
text: Lexy posing near her bedroom door in a zebra print bikini, glancing sideways
at the camera with a teasing smirk
- output:
url: sample/lexy_003000_09_20250806175850.png
text: Lexy in a pastel orange bikini with string ties, holding her iPhone up and
angling her body slightly for a flattering selfie
- output:
url: sample/lexy_003000_10_20250806175856.png
text: Lexy wearing a floral bikini set in her bedroom, one hand behind her head
and the other holding the phone, soft morning light glowing in
- output:
url: sample/lexy_003000_11_20250806175902.png
text: Lexy standing in her bedroom in a navy blue sporty bikini, hands by her
side with a strong and confident expression
- output:
url: sample/lexy_003000_12_20250806175907.png
text: Lexy posing in a black and white high-waisted bikini set, her phone held
up in front of her chest, capturing the full reflection in the mirror
- output:
url: sample/lexy_003000_13_20250806175913.png
text: Lexy in a red and white checkered bikini, standing with her legs crossed
and smiling warmly at the camera in her sunlit bedroom
- output:
url: sample/lexy_003000_14_20250806175918.png
text: Lexy wearing a turquoise triangle bikini, leaning slightly toward the mirror
while taking a close-up selfie with her black phone
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Lexy
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Lexy
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Lexy` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
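One way to grab the weights for any of those UIs is the Hugging Face CLI; the target directory below is just an example.
```bash
huggingface-cli download sidemix/lexy-brazil --local-dir ./loras/lexy-brazil
```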
|
Jacksss123/net72_uid192
|
Jacksss123
| 2025-08-06T17:30:44Z | 1 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-06T17:26:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FrontierInstruments/merged_softstart_reasoning_r64a128_1epochs
|
FrontierInstruments
| 2025-08-06T17:17:50Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T17:14:50Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** FrontierInstruments
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tensorplex-labs/gssgemma-3n-E4B-it_dpo
|
tensorplex-labs
| 2025-08-06T16:50:18Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-08-06T16:49:24Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cpatonn/Qwen3-4B-Instruct-2507-AWQ
|
cpatonn
| 2025-08-06T16:50:03Z | 214 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-06T16:25:25Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B-Instruct-2507
---
# Qwen3-4B-Instruct-2507-AWQ
## Method
Quantised using [vllm-project/llm-compressor](https://github.com/vllm-project/llm-compressor.git), [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset) and the following configs:
```python
# Import path assumed for recent llm-compressor releases
from llmcompressor.modifiers.awq import AWQModifier

recipe = [
    AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
]
```
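For serving the resulting checkpoint, a recent vLLM build with compressed-tensors/AWQ support should be able to load it directly. A sketch (reduce `--max-model-len` if you hit out-of-memory errors):
```bash
vllm serve cpatonn/Qwen3-4B-Instruct-2507-AWQ --max-model-len 262144
```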
# Qwen3-4B-Instruct-2507
<a href="https://chat.qwen.ai" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
We introduce the updated version of the **Qwen3-4B non-thinking mode**, named **Qwen3-4B-Instruct-2507**, featuring the following key enhancements:
- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.

## Model Overview
**Qwen3-4B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: **262,144 natively**.
**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Performance
| | GPT-4.1-nano-2025-04-14 | Qwen3-30B-A3B Non-Thinking | Qwen3-4B Non-Thinking | Qwen3-4B-Instruct-2507 |
|--- | --- | --- | --- | --- |
| **Knowledge** | | | |
| MMLU-Pro | 62.8 | 69.1 | 58.0 | **69.6** |
| MMLU-Redux | 80.2 | 84.1 | 77.3 | **84.2** |
| GPQA | 50.3 | 54.8 | 41.7 | **62.0** |
| SuperGPQA | 32.2 | 42.2 | 32.0 | **42.8** |
| **Reasoning** | | | |
| AIME25 | 22.7 | 21.6 | 19.1 | **47.4** |
| HMMT25 | 9.7 | 12.0 | 12.1 | **31.0** |
| ZebraLogic | 14.8 | 33.2 | 35.2 | **80.2** |
| LiveBench 20241125 | 41.5 | 59.4 | 48.4 | **63.0** |
| **Coding** | | | |
| LiveCodeBench v6 (25.02-25.05) | 31.5 | 29.0 | 26.4 | **35.1** |
| MultiPL-E | 76.3 | 74.6 | 66.6 | **76.8** |
| Aider-Polyglot | 9.8 | **24.4** | 13.8 | 12.9 |
| **Alignment** | | | |
| IFEval | 74.5 | **83.7** | 81.2 | 83.4 |
| Arena-Hard v2* | 15.9 | 24.8 | 9.5 | **43.4** |
| Creative Writing v3 | 72.7 | 68.1 | 53.6 | **83.5** |
| WritingBench | 66.9 | 72.2 | 68.5 | **83.4** |
| **Agent** | | | |
| BFCL-v3 | 53.0 | 58.6 | 57.6 | **61.9** |
| TAU1-Retail | 23.5 | 38.3 | 24.3 | **48.7** |
| TAU1-Airline | 14.0 | 18.0 | 16.0 | **32.0** |
| TAU2-Retail | - | 31.6 | 28.1 | **40.4** |
| TAU2-Airline | - | 18.0 | 12.0 | **24.0** |
| TAU2-Telecom | - | **18.4** | 17.5 | 13.2 |
| **Multilingualism** | | | |
| MultiIF | 60.7 | **70.8** | 61.3 | 69.0 |
| MMLU-ProX | 56.2 | **65.1** | 49.6 | 61.6 |
| INCLUDE | 58.6 | **67.8** | 53.8 | 60.1 |
| PolyMATH | 15.6 | 23.3 | 16.6 | **31.1** |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-4B-Instruct-2507"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --context-length 262144
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 262144
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
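Once either server is up, the endpoint speaks the standard OpenAI chat-completions protocol. A minimal `curl` sketch, assuming vLLM's default port 8000 (`top_k` and `min_p` are vLLM-specific extensions to the OpenAI request schema):
```bash
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-4B-Instruct-2507",
    "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0
  }'
```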
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-4B-Instruct-2507',
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF
|
bartowski
| 2025-08-06T16:36:56Z | 3,538 | 4 | null |
[
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-06T15:35:47Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model_relation: quantized
base_model: Qwen/Qwen3-4B-Thinking-2507
---
## Llamacpp imatrix Quantizations of Qwen3-4B-Thinking-2507 by Qwen
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b6096">b6096</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507
All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<think>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Qwen3-4B-Thinking-2507-bf16.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-bf16.gguf) | bf16 | 8.05GB | false | Full BF16 weights. |
| [Qwen3-4B-Thinking-2507-Q8_0.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q8_0.gguf) | Q8_0 | 4.28GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Qwen3-4B-Thinking-2507-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q6_K_L.gguf) | Q6_K_L | 3.40GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q6_K.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q6_K.gguf) | Q6_K | 3.31GB | false | Very high quality, near perfect, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q5_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q5_K_L.gguf) | Q5_K_L | 2.98GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q5_K_M.gguf) | Q5_K_M | 2.89GB | false | High quality, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q5_K_S.gguf) | Q5_K_S | 2.82GB | false | High quality, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q4_1.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q4_1.gguf) | Q4_1 | 2.60GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Qwen3-4B-Thinking-2507-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q4_K_L.gguf) | Q4_K_L | 2.59GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q4_K_M.gguf) | Q4_K_M | 2.50GB | false | Good quality, default size for most use cases, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q4_K_S.gguf) | Q4_K_S | 2.38GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q4_0.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q4_0.gguf) | Q4_0 | 2.38GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Qwen3-4B-Thinking-2507-IQ4_NL.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-IQ4_NL.gguf) | IQ4_NL | 2.38GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Qwen3-4B-Thinking-2507-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q3_K_XL.gguf) | Q3_K_XL | 2.33GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Qwen3-4B-Thinking-2507-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-IQ4_XS.gguf) | IQ4_XS | 2.27GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen3-4B-Thinking-2507-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q3_K_L.gguf) | Q3_K_L | 2.24GB | false | Lower quality but usable, good for low RAM availability. |
| [Qwen3-4B-Thinking-2507-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q3_K_M.gguf) | Q3_K_M | 2.08GB | false | Low quality. |
| [Qwen3-4B-Thinking-2507-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-IQ3_M.gguf) | IQ3_M | 1.96GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Qwen3-4B-Thinking-2507-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q3_K_S.gguf) | Q3_K_S | 1.89GB | false | Low quality, not recommended. |
| [Qwen3-4B-Thinking-2507-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-IQ3_XS.gguf) | IQ3_XS | 1.81GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Qwen3-4B-Thinking-2507-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q2_K_L.gguf) | Q2_K_L | 1.76GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Qwen3-4B-Thinking-2507-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-IQ3_XXS.gguf) | IQ3_XXS | 1.67GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Qwen3-4B-Thinking-2507-Q2_K.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-Q2_K.gguf) | Q2_K | 1.67GB | false | Very low quality but surprisingly usable. |
| [Qwen3-4B-Thinking-2507-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF/blob/main/Qwen_Qwen3-4B-Thinking-2507-IQ2_M.gguf) | IQ2_M | 1.51GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF --include "Qwen_Qwen3-4B-Thinking-2507-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Qwen_Qwen3-4B-Thinking-2507-GGUF --include "Qwen_Qwen3-4B-Thinking-2507-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Qwen_Qwen3-4B-Thinking-2507-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
pharaohe/dwarfblondesianfacelora
|
pharaohe
| 2025-08-06T15:59:45Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-06T15:59:13Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: woman
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# dwarfblondesianfacelora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `woman` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
nasywaanaa/large-v3-rra-id-6aug
|
nasywaanaa
| 2025-08-06T15:51:15Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"id",
"dataset:stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-6aug",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-06T15:23:01Z |
---
library_name: transformers
language:
- id
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-6aug
model-index:
- name: Whisper Large v3 - 1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 - 1.0
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the stt-project-rra-v2/golden-dataset-2.0-tvt-muffled-6aug dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
c-ho/2025-08-06-bll-ner_bert-base-multilingual-cased-ner-hrl_classweights_i10x_coumpound_n2-5
|
c-ho
| 2025-08-06T15:44:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-06T15:36:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rmdhirr/suja-lorab-restart4-c-suja-1000
|
rmdhirr
| 2025-08-06T15:34:56Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-06T15:33:52Z |
---
base_model: unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
abarlu/donut
|
abarlu
| 2025-08-06T15:33:16Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-05T15:06:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/SST-2-FULL_FT-seed30
|
ekiprop
| 2025-08-06T15:33:06Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-06T15:07:11Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SST-2-FULL_FT-seed30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-FULL_FT-seed30
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1795
- Accuracy: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
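Pending fuller documentation, a minimal usage sketch (assuming the checkpoint is a standard RoBERTa sentiment classifier loadable through the 🤗 `transformers` pipeline under this repo id; the label names and the example sentence are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline.
clf = pipeline("text-classification", model="ekiprop/SST-2-FULL_FT-seed30")

print(clf("a gripping, beautifully shot film"))
# e.g. [{'label': ..., 'score': ...}]
```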
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4015 | 0.0950 | 200 | 0.2626 | 0.8979 |
| 0.3095 | 0.1900 | 400 | 0.2196 | 0.9278 |
| 0.2698 | 0.2850 | 600 | 0.2433 | 0.9163 |
| 0.2418 | 0.3800 | 800 | 0.1982 | 0.9404 |
| 0.2302 | 0.4751 | 1000 | 0.3101 | 0.8968 |
| 0.2271 | 0.5701 | 1200 | 0.2355 | 0.9300 |
| 0.2124 | 0.6651 | 1400 | 0.1944 | 0.9300 |
| 0.2067 | 0.7601 | 1600 | 0.2010 | 0.9415 |
| 0.2054 | 0.8551 | 1800 | 0.1795 | 0.9438 |
| 0.1918 | 0.9501 | 2000 | 0.1988 | 0.9381 |
| 0.1712 | 1.0451 | 2200 | 0.1969 | 0.9335 |
| 0.1421 | 1.1401 | 2400 | 0.1943 | 0.9392 |
| 0.1511 | 1.2352 | 2600 | 0.2512 | 0.9323 |
| 0.1511 | 1.3302 | 2800 | 0.2293 | 0.9335 |
| 0.1461 | 1.4252 | 3000 | 0.2454 | 0.9323 |
| 0.1433 | 1.5202 | 3200 | 0.2441 | 0.9346 |
| 0.1591 | 1.6152 | 3400 | 0.2179 | 0.9289 |
| 0.138 | 1.7102 | 3600 | 0.3245 | 0.9060 |
| 0.1382 | 1.8052 | 3800 | 0.2524 | 0.9323 |
| 0.1541 | 1.9002 | 4000 | 0.2077 | 0.9278 |
| 0.1335 | 1.9952 | 4200 | 0.2670 | 0.9312 |
| 0.1099 | 2.0903 | 4400 | 0.2445 | 0.9312 |
| 0.1088 | 2.1853 | 4600 | 0.2541 | 0.9300 |
| 0.1117 | 2.2803 | 4800 | 0.3141 | 0.9197 |
| 0.1052 | 2.3753 | 5000 | 0.2953 | 0.9220 |
| 0.1123 | 2.4703 | 5200 | 0.2794 | 0.9266 |
| 0.1035 | 2.5653 | 5400 | 0.2783 | 0.9300 |
| 0.1173 | 2.6603 | 5600 | 0.2436 | 0.9346 |
| 0.1005 | 2.7553 | 5800 | 0.2554 | 0.9346 |
| 0.1107 | 2.8504 | 6000 | 0.2594 | 0.9266 |
| 0.0981 | 2.9454 | 6200 | 0.2906 | 0.9312 |
| 0.0965 | 3.0404 | 6400 | 0.3357 | 0.9312 |
| 0.0812 | 3.1354 | 6600 | 0.2544 | 0.9438 |
| 0.0848 | 3.2304 | 6800 | 0.2733 | 0.9392 |
| 0.0891 | 3.3254 | 7000 | 0.2623 | 0.9312 |
| 0.075 | 3.4204 | 7200 | 0.3035 | 0.9381 |
| 0.0791 | 3.5154 | 7400 | 0.2715 | 0.9404 |
| 0.0785 | 3.6105 | 7600 | 0.2622 | 0.9392 |
| 0.082 | 3.7055 | 7800 | 0.2274 | 0.9392 |
| 0.0764 | 3.8005 | 8000 | 0.2828 | 0.9369 |
| 0.0795 | 3.8955 | 8200 | 0.2644 | 0.9381 |
| 0.0836 | 3.9905 | 8400 | 0.2614 | 0.9369 |
| 0.0612 | 4.0855 | 8600 | 0.3463 | 0.9220 |
| 0.0488 | 4.1805 | 8800 | 0.3500 | 0.9335 |
| 0.0574 | 4.2755 | 9000 | 0.3381 | 0.9300 |
| 0.0684 | 4.3705 | 9200 | 0.3019 | 0.9358 |
| 0.0629 | 4.4656 | 9400 | 0.2993 | 0.9323 |
| 0.0539 | 4.5606 | 9600 | 0.3095 | 0.9369 |
| 0.067 | 4.6556 | 9800 | 0.2966 | 0.9381 |
| 0.0573 | 4.7506 | 10000 | 0.2836 | 0.9415 |
| 0.0567 | 4.8456 | 10200 | 0.3004 | 0.9346 |
| 0.0623 | 4.9406 | 10400 | 0.2936 | 0.9381 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-5bit
|
lmstudio-community
| 2025-08-06T15:23:11Z | 76 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"region:us"
] |
text-generation
| 2025-08-06T15:22:30Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
## 💫 Community Model> Qwen3-4B-Thinking-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)<br>
**MLX quantization**: provided by [LM Studio team](https://x.com/lmstudio) using [mlx_lm](https://github.com/ml-explore/mlx-lm)<br>
## Technical Details
5-bit quantized version of Qwen3-4B-Thinking-2507 using MLX, optimized for Apple Silicon.
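Outside of LM Studio, the quant can also be run with the `mlx_lm` Python package. A minimal sketch, assuming the standard `mlx_lm` `load`/`generate` API; the prompt and token budget are only examples:
```python
from mlx_lm import load, generate

# Load the 5-bit MLX weights directly from this repo (Apple Silicon required).
model, tokenizer = load("lmstudio-community/Qwen3-4B-Thinking-2507-MLX-5bit")

prompt = "Explain the difference between a stack and a queue."
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```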
## Special thanks
🙏 Special thanks to the [Apple Machine Learning Research](https://github.com/ml-explore) team for creating [MLX](https://github.com/ml-explore/mlx).
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
loweegee/Reinforce-helicopter
|
loweegee
| 2025-08-06T15:01:13Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-07-30T16:25:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-helicopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.60 +/- 11.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mradermacher/II-Search-CIR-4B-i1-GGUF
|
mradermacher
| 2025-08-06T15:00:21Z | 194 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Intelligent-Internet/II-Search-CIR-4B",
"base_model:quantized:Intelligent-Internet/II-Search-CIR-4B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-06T12:58:25Z |
---
base_model: Intelligent-Internet/II-Search-CIR-4B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Intelligent-Internet/II-Search-CIR-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#II-Search-CIR-4B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
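As a concrete starting point, a single quant can be fetched and run from Python. A minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages; the chosen file (listed in the table below) and the generation settings are only examples:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (pick your own size/quality trade-off).
gguf_path = hf_hub_download(
    repo_id="mradermacher/II-Search-CIR-4B-i1-GGUF",
    filename="II-Search-CIR-4B.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Briefly explain what an imatrix quant is.", max_tokens=128)
print(out["choices"][0]["text"])
```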
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF/resolve/main/II-Search-CIR-4B.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/II-Search-CIR-4B-GGUF
|
mradermacher
| 2025-08-06T15:00:10Z | 76 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Intelligent-Internet/II-Search-CIR-4B",
"base_model:quantized:Intelligent-Internet/II-Search-CIR-4B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T11:09:54Z |
---
base_model: Intelligent-Internet/II-Search-CIR-4B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Intelligent-Internet/II-Search-CIR-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#II-Search-CIR-4B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/II-Search-CIR-4B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
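To grab just one quant without cloning the whole repository, the `huggingface_hub` package can download a single file. A minimal sketch; the file name is one of the quants listed in the table below:
```python
from huggingface_hub import hf_hub_download

# Download only the Q4_K_M quant (~2.6 GB) rather than the full repo.
gguf_path = hf_hub_download(
    repo_id="mradermacher/II-Search-CIR-4B-GGUF",
    filename="II-Search-CIR-4B.Q4_K_M.gguf",
)
print(gguf_path)  # local path, usable with any GGUF-compatible runtime
```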
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/II-Search-CIR-4B-GGUF/resolve/main/II-Search-CIR-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
chenqi1126/Llasa_ckpts
|
chenqi1126
| 2025-08-06T14:52:58Z | 7 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-07-18T08:25:31Z |
---
license: apache-2.0
---
|
optimum-internal-testing/tiny-random-llava-next-mistral
|
optimum-internal-testing
| 2025-08-06T14:28:34Z | 583 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-06T14:01:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Charcotta/freyja
|
Charcotta
| 2025-08-06T14:21:01Z | 9 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-06T13:56:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: freyja
---
# Freyja
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `freyja` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "freyja",
"lora_weights": "https://huggingface.co/Charcotta/freyja/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Charcotta/freyja', weight_name='lora.safetensors')
image = pipeline('freyja').images[0]
image.save("freyja.png")  # optional: write the generated image to disk
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Charcotta/freyja/discussions) to add images that show off what you’ve made with this LoRA.
|
kerrlc/apicalling
|
kerrlc
| 2025-08-06T14:13:27Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T11:14:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Forceless/PPTAgent-coder-3B
|
Forceless
| 2025-08-06T13:16:42Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T10:40:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sobs0/new_wav2vec2-base-aphasia-oth
|
sobs0
| 2025-08-06T12:35:27Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T11:38:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Butanium/simple-stories-0L8H512D-attention-only-toy-transformer
|
Butanium
| 2025-08-06T12:05:46Z | 3 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-06T12:05:43Z |
# 0-Layer 8-Head Attention-Only Transformer
This is a simplified transformer model with 0 attention layer(s) and 8 attention head(s), hidden size 512, designed for studying attention mechanisms in isolation.
## Architecture Differences from Vanilla Transformer
**Removed Components:**
- **No MLP/Feed-Forward layers** - Only attention layers
- **No Layer Normalization** - No LayerNorm before/after attention
- **No positional encoding** - No position embeddings of any kind
**Kept Components:**
- Token embeddings
- Multi-head self-attention with causal masking
- Residual connections around attention layers
- Language modeling head (linear projection to vocabulary)
This minimal architecture isolates the attention mechanism, making it useful for mechanistic interpretability research as described in [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html).
## Usage
```python
import torch.nn as nn
from transformers import PreTrainedModel, LlamaConfig

# NOTE: `AttentionLayer` (one causal multi-head self-attention block with a
# residual connection) is assumed to be defined alongside this class, as in the
# training repository.

class AttentionOnlyTransformer(PreTrainedModel):
"""Attention-only transformer with configurable number of attention layers."""
config_class = LlamaConfig
def __init__(self, config: LlamaConfig):
super().__init__(config)
self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size)
self.layers = nn.ModuleList([AttentionLayer(config) for _ in range(config.num_hidden_layers)])
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
batch_size, seq_len = input_ids.shape
hidden_states = self.embed_tokens(input_ids)
assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
assert attention_mask.shape == (batch_size, seq_len)
for layer in self.layers:
hidden_states = layer(hidden_states, attention_mask)
assert hidden_states.shape == (batch_size, seq_len, self.config.hidden_size)
logits = self.lm_head(hidden_states)
assert logits.shape == (batch_size, seq_len, self.config.vocab_size)
loss = None
if labels is not None:
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(
shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
)
return {"loss": loss, "logits": logits}
model = AttentionOnlyTransformer.from_pretrained('Butanium/simple-stories-0L8H512D-attention-only-toy-transformer')
```
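A quick smoke test of the loaded model (a sketch: the random token ids are only for shape checking and do not correspond to real text):
```python
import torch

input_ids = torch.randint(0, model.config.vocab_size, (1, 16))
attention_mask = torch.ones_like(input_ids)

out = model(input_ids=input_ids, attention_mask=attention_mask, labels=input_ids)
print(out["loss"], out["logits"].shape)  # scalar loss, (1, 16, vocab_size)
```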
## Training Data
The model is trained on the [SimpleStories dataset](https://huggingface.co/datasets/SimpleStories/SimpleStories) for next-token prediction.
|
tyanfarm/gemma-3n-hotel-faq-conversations-adapters
|
tyanfarm
| 2025-08-06T11:46:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T10:38:41Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tamewild/4b_v37_merged_e8
|
tamewild
| 2025-08-06T11:40:38Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T11:38:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
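Pending details from the authors, a generic sketch using the standard `transformers` text-generation API is shown below; the chat-style prompt and generation settings are assumptions, not documented behavior of this checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tamewild/4b_v37_merged_e8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed chat-style usage, since the repository is tagged "conversational".
messages = [{"role": "user", "content": "Hello, what can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```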
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SzymonKozl/setfit-app-class-netto-balanced-v1
|
SzymonKozl
| 2025-08-06T11:33:25Z | 6 | 0 |
setfit
|
[
"setfit",
"safetensors",
"new",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"custom_code",
"arxiv:2209.11055",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"region:us"
] |
text-classification
| 2025-08-06T11:33:14Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 23 [TextView] | 1 [TextView] | Frankfurt am Main [TextView] | Melibocusstr.
52 [TextView] | Ändern [TextView] | Neues für Dich [TextView] | Alle anzeigen
[TextView] | Jetzt mitmachen! [TextView] | Sparen leicht gemacht! [TextView] |
Deine Funktionen [TextView] | 0 [TextView] | ºPunkte [TextView] | Digitale Kassenbons
[TextView] | Code einlösen [TextView] | Corvette-Verlosung [TextView] | Online-Shop
Coupons [TextView] | Rezepte [TextView] | Alle anzeigen [TextView] | Cheeseburger
Muffins [TextView] | Nudelsalat Klassiker [TextView] | Mein Konto [TextView] |
Mein PAYBACK [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons
[TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Cheeseburger
Muffins [TextView] | Nudelsalat Klassiker [TextView]
- text: "23 [TextView] | 1 [TextView] | Coupons [TextView] | 1 [TextView] | Meine\
\ Coupons (17) [TextView] | App-Angebote (57) [TextView] | Filiale [TextView]\
\ | gültig bis\n 27.07.2025 [TextView] | 10fach °P [TextView] | auf den Einkauf\
\ in der Filiale!* [TextView] | Aktiv [TextView] | OnlineShop [TextView] | gültig\
\ bis\n 28.07.2025 [TextView] | 10% [TextView] | Rabatt [TextView] | auf Gutscheingold\
\ Restaurant & City Geschenkcode [TextView] | Zum Online-Shop [TextView] | Online\
\ [TextView] | Online [TextView] | Frankfurt am Main [TextView] | Melibocusstr.\
\ 52 [TextView] | Ändern [TextView] | Startseite [TextView] | Angebote [TextView]\
\ | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] |\
\ Coupons [TextView] | 1 [TextView] | Meine Coupons (17) [TextView] | Coupons\
\ [TextView] | 1 [TextView] | Meine Coupons (17) [TextView] | App-Angebote (57)\
\ [TextView] | gültig bis\n 27.07.2025 [TextView] | auf ausgewählte SUPs [TextView]\
\ | Zum Online-Shop [TextView] | gültig bis\n 03.08.2025 [TextView] | 40€ [TextView]\
\ | App-Rabatt [TextView] | auf Rolektro E-Joy [TextView] | Zum Online-Shop [TextView]\
\ | Online [TextView] | gültig bis\n 28.07.2025 [TextView] | 100€ [TextView] |\
\ Rabatt [TextView] | auf ausgewählte Mobilitätsartikel [TextView] | Zum Online-Shop\
\ [TextView] | Online [TextView] | 12€ [TextView] | Online [TextView] | Frankfurt\
\ am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Meine\
\ Coupons (17) [TextView] | App-Angebote (57) [TextView] | gültig bis\n 03.08.2025\
\ [TextView] | 40€ [TextView] | App-Rabatt [TextView] | auf Rolektro E-Joy [TextView]\
\ | Zum Online-Shop [TextView] | gültig bis\n 28.07.2025 [TextView] | 100€ [TextView]\
\ | Rabatt [TextView] | auf ausgewählte Mobilitätsartikel [TextView] | Zum Online-Shop\
\ [TextView] | Online [TextView] | gültig bis\n 27.07.2025 [TextView] | 12€ [TextView]\
\ | App-Rabatt [TextView] | auf Veska Balkonkraftwerk [TextView] | Zum Online-Shop\
\ [TextView] | Online [TextView]"
- text: "23 [TextView] | 1 [TextView] | Angebote [TextView] | Frankfurt am Main [TextView]\
\ | Melibocusstr. 52 [TextView] | Ändern [TextView] | VitaminWell [TextView] |\
\ 0,5 l [TextView] | 3.38 / l [TextView] | versch. Sorten [TextView] | Aktion\
\ [TextView] | 1.69* [TextView] | Madeleines [TextView] | 250 g [TextView] | 7.16\
\ / kg [TextView] | versch. Sorten\nNetto plus App-Preis: 1.49 (5.96 / kg)\nGewährung\
\ eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons.\
\ Weitere Infos im Bereich \"Coupons\" der Netto plus App. [TextView] | Aktion\
\ [TextView] | 1.79* [TextView] | kinder Cards Family Pack [TextView] | 256 g\
\ [TextView] | 15.59 / kg [TextView] | Netto plus App-Preis: 3.69 (14.41 / kg)\n\
Gewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons.\
\ Weitere Infos im Bereich \"Coupons\" der Netto plus App. [TextView] | -13% [TextView]\
\ | UVP 4.59 [TextView] | 3.99* [TextView] | Durchsuche unsere Angebote ... [TextView]\
\ | Knüller der Woche [TextView] | Getränke [TextView] | Appxklusive Angebote\
\ [TextView] | Obst & Gemüse [TextView] | Pflanzen [TextView] | Fleisch & Wurst\
\ [TextView] | Sparangebote [TextView] | Payback°Punkte [TextView] | Weine & Spirituosen\
\ [TextView] | Backstube [TextView] | TV-Werbung [TextView] | Drogerie [TextView]\
\ | Haushalt, Freizeit & Co. [TextView] | Super Wochenende [TextView] | Wraps\
\ & Tacos [TextView] | Handwerk & Co. [TextView] | Netto-Tag [TextView] | Samstagskracher\
\ [TextView] | Wohnen & Design [TextView] | Garten & Balkon [TextView] | Sport\
\ & Fitness [TextView] | Schönheit &... [TextView] | Küche & Haushalt [TextView]\
\ | Mode [TextView] | Haus & Heimwerken [TextView] | Reinigungsmittel... [TextView]\
\ | Auto & Zweirad [TextView] | Technik & Multimedia [TextView] | Gesundheit &...\
\ [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView]\
\ | Online-Shop [TextView] | Karte + Zahlen [TextView] | VitaminWell [TextView]\
\ | 0,5 l [TextView] | 3.38 / l [TextView] | versch. Sorten [TextView] | Aktion\
\ [TextView] | 1.69* [TextView] | Madeleines [TextView] | 250 g [TextView] | 7.16\
\ / kg [TextView] | versch. Sorten\nNetto plus App-Preis: 1.49 (5.96 / kg)\nGewährung\
\ eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons.\
\ Weitere Infos im Bereich \"Coupons\" der Netto plus App. [TextView] | Aktion\
\ [TextView] | 1.79* [TextView] | kinder Cards Family Pack [TextView] | 256 g\
\ [TextView] | 15.59 / kg [TextView] | Netto plus App-Preis: 3.69 (14.41 / kg)\n\
Gewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons.\
\ Weitere Infos im Bereich \"Coupons\" der Netto plus App. [TextView] | -13% [TextView]\
\ | UVP 4.59 [TextView] | 3.99* [TextView] | Softlan Weichspüler [TextView] |\
\ 650 ml – 1 l [TextView] | 1.89 - 2.91 / l [TextView] | versch. Sorten\nNetto\
\ plus App-Preis: 0.99 (0.99 - 1.52 / l)\nGewährung eines Netto plus App-Preises\
\ nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im Bereich \"Coupons\"\
\ der Netto plus App.\nmax. Abgabemenge App Preis: 4 Stück [TextView] | Aktion\
\ [TextView] | 1.89* [TextView] | Delverde Pasta [TextView] | 1 kg [TextView]\
\ | versch. Ausformungen\nNetto plus App-Preis: 1.49 \nGewährung eines Netto plus\
\ App-Preises nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im\
\ Bereich \"Coupons\" der Netto plus App. [TextView] | -44% [TextView] | UVP 3.59\
\ [TextView] | 1.99* [TextView] | Knüller der Woche [TextView] | Getränke [TextView]"
- text: 'Melibocusstr. 52 [TextView] | 60528 Frankfurt am Main [TextView] | Ändern
[TextView] | Karte + Zahlen [TextView] | Nur Karte [TextView] | Code gültig: 5:00
Min. [TextView] | Zahlung: [TextView] | PayPal [TextView] | Code: [TextView] |
PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler
Kassenbon [TextView] | inaktiv [TextView] | Bitte Deine PIN eingeben [TextView]
| Hast Du Deine PIN vergessen? [TextView] | Du möchtest Deine PAYBACK
°Punkte-Gutscheine einlösen? [TextView] | PAYBACK °Punkte [TextView] | Coupons
aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv
[TextView] | **** 2537 [TextView] | 58 von 66 [TextView] | **** 2537 [TextView]
| 58 von 66 [TextView]'
- text: 23 [TextView] | 1 [TextView] | Frankfurt am Main [TextView] | Melibocusstr.
52 [TextView] | Ändern [TextView] | Prospekte [TextView] | Alle anzeigen [TextView]
| Filialhandzettel [TextView] | ab Montag, 21.07.25 [TextView] | Bestellmagazin
[TextView] | Einfach online bestellen [TextView] | Reisemagazin [TextView] | Jetzt
Reise buchen [TextView] | Neues für Dich [TextView] | Alle anzeigen [TextView]
| Jetzt mitmachen! [TextView] | Sparen leicht gemacht! [TextView] | Deine Funktionen
[TextView] | 0 [TextView] | ºPunkte [TextView] | Digitale Kassenbons [TextView]
| Code einlösen [TextView] | Corvette-Verlosung [TextView] | Online-Shop Coupons
[TextView] | Mein Konto [TextView] | Mein PAYBACK [TextView] | Startseite [TextView]
| Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte +
Zahlen [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView]
| Ändern [TextView] | Prospekte [TextView] | Alle anzeigen [TextView] | Filialhandzettel
[TextView] | ab Montag, 21.07.25 [TextView]
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: Alibaba-NLP/gte-multilingual-base
---
# SetFit with Alibaba-NLP/gte-multilingual-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 8192 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------------|:---------------|
| Angebote | <ul><li>'22 [TextView] | 1 [TextView] | Angebote [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Paprika rot [TextView] | 500 g Beutel [TextView] | 2.22 / kg [TextView] | Niederlande/Spanien, Kl. I [TextView] | -34% [TextView] | statt 1.69 [TextView] | 1.11* [TextView] | Bisheriger 30 Tage\nBestpreis: 1.49* [TextView] | Müller Joghurt mit der Ecke [TextView] | 113 – 150 g [TextView] | 2.60 - 3.45 / kg [TextView] | gekühlt, versch. Sorten\nNetto plus App-Preis: 0.33 (2.20 - 2.92 / kg)\nGewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im Bereich "Coupons" der Netto plus App. [TextView] | Aktion [TextView] | 0.39* [TextView] | Leerdammer [TextView] | 100 – 140 g [TextView] | 12.07 - 16.90 / kg [TextView] | gekühlt, versch. Sorten [TextView] | -41% [TextView] | statt 2.89 [TextView] | 1.69* [TextView] | Bisheriger 30 Tage\nBestpreis: 1.79* [TextView] | Pringles [TextView] | Angebote [TextView] | -28% [TextView] | statt 2.49 [TextView] | 1.79* [TextView] | Filiale [TextView] | Pringles [TextView] | gültig von Montag, 21.07.25 - Samstag, 26.07.25 [TextView] | 165 g [TextView] | 10.85 / kg [TextView] | versch. Sorten [TextView] | Die abgebildeten Artikel können wegen des begrenzten Angebots schon am ersten Tag ausverkauft sein.\n*Alle Preise in Euro (€) inkl. gesetzlicher Mehrwertsteuer, zzgl. Versandkosten. UVP: Unverbindliche Preisempfehlung des Herstellers.\nPreise (inkl. MwSt.) und Verkaufseinheiten (Stückzahl/Mengeneinheit) können im Online-Shop abweichen.\nStatt- und durchgestrichene Preise beziehen sich auf unseren zuvor geforderten Verkaufspreis.\nAlle Artikel solange der Vorrat reicht! Änderungen und Irrtürmer vorbehalten. Abbildungen ähnlich. Die abgebildeten Artikel können wegen des begrenzten Angebots schon am ersten Tag ausverkauft sein.\nAbgabe nur in haushaltsüblichen Mengen! [TextView]'</li><li>'23 [TextView] | 1 [TextView] | Angebote [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Lauchzwiebeln [TextView] | Bund [TextView] | Deutschland [TextView] | Aktion [TextView] | 0.55* [TextView] | Speisefrühkartoffeln [TextView] | 4 kg Netz [TextView] | 0.95 / kg [TextView] | Deutschland, versch. Kocheigenschaften [TextView] | Aktion [TextView] | 3.79* [TextView] | Buschbohnen [TextView] | 500 g [TextView] | 2.98 / kg [TextView] | Deutschland, Kl. I [TextView] | Aktion [TextView] | 1.49* [TextView] | Zwetschgen [TextView] | z.B. 1 kg Schale [TextView] | Deutschland, Kl. I [TextView] | Aktion [TextView] | 2.22* [TextView] | Johannisbeeren rot [TextView] | z.B. 500 g Schale [TextView] | 3.98 / kg [TextView] | Deutschland, Kl. I [TextView] | Aktion [TextView] | 1.99* [TextView] | Heidelbeeren [TextView] | 200 g Schale [TextView] | 8.95 / kg [TextView] | Deutschland, Kl. I [TextView] | -19% [TextView] | statt 2.22 [TextView] | 1.79* [TextView] | Bisheriger 30 Tage\nBestpreis: 2.22* [TextView] | Durchsuche unsere Angebote ... [TextView] | Knüller der Woche [TextView] | Getränke [TextView] | Appxklusive Angebote [TextView] | Obst & Gemüse [TextView] | Pflanzen [TextView] | Fleisch & Wurst [TextView] | Sparangebote [TextView] | Payback°Punkte [TextView] | Weine & Spirituosen [TextView] | Backstube [TextView] | TV-Werbung [TextView] | Drogerie [TextView] | Haushalt, Freizeit & Co. [TextView] | Super Wochenende [TextView] | Wraps & Tacos [TextView] | Handwerk & Co. 
[TextView] | Netto-Tag [TextView] | Samstagskracher [TextView] | Wohnen & Design [TextView] | Garten & Balkon [TextView] | Sport & Fitness [TextView] | Schönheit &... [TextView] | Küche & Haushalt [TextView] | Mode [TextView] | Haus & Heimwerken [TextView] | Reinigungsmittel... [TextView] | Auto & Zweirad [TextView] | Technik & Multimedia [TextView] | Gesundheit &... [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView]'</li><li>'22 [TextView] | 1 [TextView] | Angebote [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Paprika rot [TextView] | 500 g Beutel [TextView] | 2.22 / kg [TextView] | Niederlande/Spanien, Kl. I [TextView] | -34% [TextView] | statt 1.69 [TextView] | 1.11* [TextView] | Bisheriger 30 Tage\nBestpreis: 1.49* [TextView] | Müller Joghurt mit der Ecke [TextView] | 113 – 150 g [TextView] | 2.60 - 3.45 / kg [TextView] | gekühlt, versch. Sorten\nNetto plus App-Preis: 0.33 (2.20 - 2.92 / kg)\nGewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im Bereich "Coupons" der Netto plus App. [TextView] | Müller Joghurt mit der Ecke [TextView] | 113 – 150 g [TextView] | 2.60 - 3.45 / kg [TextView] | gekühlt, versch. Sorten\nNetto plus App-Preis: 0.33 (2.20 - 2.92 / kg)\nGewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im Bereich "Coupons" der Netto plus App. [TextView] | Aktion [TextView] | 0.39* [TextView] | Leerdammer [TextView] | 100 – 140 g [TextView] | 12.07 - 16.90 / kg [TextView] | gekühlt, versch. Sorten [TextView] | -41% [TextView] | statt 2.89 [TextView] | 1.69* [TextView] | Bisheriger 30 Tage\nBestpreis: 1.79* [TextView] | Pringles [TextView] | 165 g [TextView] | 10.85 / kg [TextView] | versch. Sorten [TextView] | -28% [TextView] | statt 2.49 [TextView] | 1.79* [TextView] | Bisheriger 30 Tage\nBestpreis: 2.49* [TextView] | Mövenpick [TextView] | 500 g [TextView] | 13.98 / kg [TextView] | Kaffee gemahlen oder Ganze Bohnen,versch. Sorten\nNetto plus App-Preis: 6.49 (12.98 / kg)\nGewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im Bereich "Coupons" der Netto plus App. [TextView] | -34% [TextView] | UVP 10.69 [TextView] | 6.99* [TextView] | Durchsuche unsere Angebote ... [TextView] | Knüller der Woche [TextView] | Getränke [TextView] | Appxklusive Angebote [TextView] | Obst & Gemüse [TextView] | Pflanzen [TextView] | Fleisch & Wurst [TextView] | Sparangebote [TextView] | Payback°Punkte [TextView] | Weine & Spirituosen [TextView] | Backstube [TextView] | TV-Werbung [TextView] | Drogerie [TextView] | Haushalt, Freizeit & Co. [TextView] | Super Wochenende [TextView] | Wraps & Tacos [TextView] | Handwerk & Co. [TextView] | Netto-Tag [TextView] | Samstagskracher [TextView] | Wohnen & Design [TextView] | Garten & Balkon [TextView] | Sport & Fitness [TextView] | Schönheit &... [TextView] | Küche & Haushalt [TextView] | Mode [TextView] | Haus & Heimwerken [TextView] | Reinigungsmittel... [TextView] | Auto & Zweirad [TextView] | Technik & Multimedia [TextView] | Gesundheit &... [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Angebote [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Mövenpick [TextView] | 500 g [TextView] | 13.98 / kg [TextView] | Kaffee gemahlen oder Ganze Bohnen,versch. 
Sorten\nNetto plus App-Preis: 6.49 (12.98 / kg)\nGewährung eines Netto plus App-Preises nur bei Einlösung des zugehörigen App-Coupons. Weitere Infos im Bereich "Coupons" der Netto plus App. [TextView] | -34% [TextView] | UVP 10.69 [TextView] | 6.99* [TextView] | Salami [TextView] | 200 g [TextView] | 6.45 / kg [TextView] | versch. Sorten\nRegionale Verpackungsabweichungen möglich [TextView] | -23% [TextView] | statt 1.69 [TextView] | 1.29* [TextView] | Bisheriger 30 Tage\nBestpreis: 1.69* [TextView] | Zott Monte [TextView] | 4 x 100 g [TextView] | 3.73 / kg [TextView] | gekühlt, versch. Sorten [TextView] | -44% [TextView] | statt 2.69 [TextView] | 1.49* [TextView] | Bisheriger 30 Tage\nBestpreis: 1.49* [TextView] | Melitta Barista [TextView] | 1 kg [TextView] | ganze Kaffeebohnen, versch. Sorten [TextView] | -34% [TextView] | UVP 21.49 [TextView] | 13.99* [TextView]'</li></ul> |
| Coupons | <ul><li>'23 [TextView] | 1 [TextView] | Coupons [TextView] | 1 [TextView] | Meine Coupons (17) [TextView] | App-Angebote (57) [TextView] | Filiale [TextView] | gültig bis\n 27.07.2025 [TextView] | 10fach °P [TextView] | auf den Einkauf in der Filiale!* [TextView] | Aktiv [TextView] | OnlineShop [TextView] | gültig bis\n 28.07.2025 [TextView] | 10% [TextView] | Rabatt [TextView] | auf Gutscheingold Restaurant & City Geschenkcode [TextView] | Zum Online-Shop [TextView] | Online [TextView] | Online [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Coupons [TextView] | 1 [TextView] | Meine Coupons (17) [TextView] | Coupons [TextView] | 1 [TextView] | Meine Coupons (17) [TextView] | App-Angebote (57) [TextView] | gültig bis\n 27.07.2025 [TextView] | auf ausgewählte SUPs [TextView] | Zum Online-Shop [TextView] | gültig bis\n 03.08.2025 [TextView] | 40€ [TextView] | App-Rabatt [TextView] | auf Rolektro E-Joy [TextView] | Zum Online-Shop [TextView] | Online [TextView] | gültig bis\n 28.07.2025 [TextView] | 100€ [TextView] | Rabatt [TextView] | auf ausgewählte Mobilitätsartikel [TextView] | Zum Online-Shop [TextView] | Online [TextView] | 12€ [TextView] | Online [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Meine Coupons (17) [TextView] | App-Angebote (57) [TextView] | gültig bis\n 03.08.2025 [TextView] | 40€ [TextView] | App-Rabatt [TextView] | auf Rolektro E-Joy [TextView] | Zum Online-Shop [TextView] | gültig bis\n 28.07.2025 [TextView] | 100€ [TextView] | Rabatt [TextView] | auf ausgewählte Mobilitätsartikel [TextView] | Zum Online-Shop [TextView] | Online [TextView] | gültig bis\n 27.07.2025 [TextView] | 12€ [TextView] | App-Rabatt [TextView] | auf Veska Balkonkraftwerk [TextView] | Zum Online-Shop [TextView] | Online [TextView]'</li><li>'23 [TextView] | 1 [TextView] | Meine Coupons (17) [TextView] | App-Angebote (57) [TextView] | gültig bis\n 26.07.2025 [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -20% [TextView] | 4.00€ [TextView] | App-Rabatt [TextView] | Kinder Tretroller [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -30% [TextView] | 3.00€ [TextView] | App-Rabatt [TextView] | auf Kinder Tieffußbett Pantoletten [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | -30% [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 
52 [TextView] | Ändern [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -20% [TextView] | 4.00€ [TextView] | App-Rabatt [TextView] | Kinder Tretroller [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -30% [TextView] | 3.00€ [TextView] | App-Rabatt [TextView] | auf Kinder Tieffußbett Pantoletten [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -20% [TextView] | 4.00€ [TextView] | App-Rabatt [TextView] | Kinder Tretroller [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -30% [TextView] | 3.00€ [TextView] | App-Rabatt [TextView] | auf Kinder Tieffußbett Pantoletten [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | -30% [TextView] | gültig bis\n 26.07.2025 [TextView] | 3.00€ [TextView] | App-Rabatt [TextView] | auf Kinder Tieffußbett Pantoletten [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -30% [TextView] | 3.00€ [TextView] | App-Rabatt [TextView] | Wasserspritz-Rucksack [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -33% [TextView] | 2.00€ [TextView] | App-Rabatt [TextView] | auf Kinder Slips oder Boxer [TextView] | Aktiv [TextView]'</li><li>'23 [TextView] | 1 [TextView] | Meine Coupons (26) [TextView] | App-Angebote (57) [TextView] | gültig bis\n 26.07.2025 [TextView] | -15% [TextView] | 0.06€ [TextView] | App-Rabatt [TextView] | auf Müller Joghurt mit der Ecke [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | 1.00€ [TextView] | App-Rabatt [TextView] | auf Spaten Münchner Hell oder Alkoholfrei [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | -34% [TextView] | 0.80€ [TextView] | App-Rabatt [TextView] | auf Mühlhäuser Thüringer Pflaumenmus [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 
52 [TextView] | Ändern [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Meine Coupons (26) [TextView] | App-Angebote (57) [TextView] | gültig bis\n 26.07.2025 [TextView] | auf Müller Joghurt mit der Ecke [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | 1.00€ [TextView] | App-Rabatt [TextView] | auf Spaten Münchner Hell oder Alkoholfrei [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -34% [TextView] | 0.80€ [TextView] | App-Rabatt [TextView] | auf Mühlhäuser Thüringer Pflaumenmus [TextView] | Aktiv [TextView] | gültig bis\n 26.07.2025 [TextView] | App-Rabatt [TextView] | auf Spaten Münchner Hell oder Alkoholfrei [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -34% [TextView] | 0.80€ [TextView] | App-Rabatt [TextView] | auf Mühlhäuser Thüringer Pflaumenmus [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -44% [TextView] | 0.70€ [TextView] | App-Rabatt [TextView] | auf Knorr Soße [TextView] | Aktiv [TextView] | gültig bis\n 26.07.2025 [TextView] | gültig bis\n 26.07.2025 [TextView] | -44% [TextView] | 0.70€ [TextView] | App-Rabatt [TextView] | auf Knorr Soße [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView] | gültig bis\n 26.07.2025 [TextView] | -10% [TextView] | 0.30€ [TextView] | App-Rabatt [TextView] | auf Nestle Minis +1 [TextView] | Aktiv [TextView] | Angebot mehrfach einlösbar [TextView]'</li></ul> |
| Kundenkarte | <ul><li>'Melibocusstr. 52 [TextView] | 60528 Frankfurt am Main [TextView] | Ändern [TextView] | Karte + Zahlen [TextView] | Nur Karte [TextView] | Code gültig: 5:00 Min. [TextView] | Zahlung: [TextView] | PayPal [TextView] | Code: [TextView] | PAYBACK °Punkte [TextView] | **** 2537 [TextView] | Coupons aktiv [TextView] | 58 von 66 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView] | Bitte Deine PIN eingeben [TextView] | Hast Du Deine PIN vergessen? [TextView] | Du möchtest Deine PAYBACK\n°Punkte-Gutscheine einlösen? [TextView] | PAYBACK °Punkte [TextView] | **** 2537 [TextView] | Coupons aktiv [TextView] | 58 von 66 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView]'</li><li>'Melibocusstr. 52 [TextView] | 60528 Frankfurt am Main [TextView] | Ändern [TextView] | Karte + Zahlen [TextView] | Nur Karte [TextView] | Code gültig: 5:00 Min. [TextView] | Zahlung: [TextView] | PayPal [TextView] | Code: [TextView] | PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView] | Bitte Deine PIN eingeben [TextView] | Hast Du Deine PIN vergessen? [TextView] | Du möchtest Deine PAYBACK\n°Punkte-Gutscheine einlösen? [TextView] | PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView] | 23 [TextView] | 1 [TextView] | Angebote [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Lauchzwiebeln [TextView] | Bund [TextView] | Deutschland [TextView] | Aktion [TextView] | 0.55* [TextView] | Speisefrühkartoffeln [TextView] | 4 kg Netz [TextView] | 0.95 / kg [TextView] | Deutschland, versch. Kocheigenschaften [TextView] | Aktion [TextView] | 3.79* [TextView] | Buschbohnen [TextView] | 500 g [TextView] | 2.98 / kg [TextView] | Deutschland, Kl. I [TextView] | Aktion [TextView] | 1.49* [TextView] | Zwetschgen [TextView] | z.B. 1 kg Schale [TextView] | Deutschland, Kl. I [TextView] | Aktion [TextView] | 2.22* [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView]'</li><li>'Melibocusstr. 52 [TextView] | 60528 Frankfurt am Main [TextView] | Ändern [TextView] | Karte + Zahlen [TextView] | Nur Karte [TextView] | Code gültig: 5:00 Min. [TextView] | Zahlung: [TextView] | PayPal [TextView] | Code: [TextView] | PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView] | Bitte Deine PIN eingeben [TextView] | Hast Du Deine PIN vergessen? [TextView] | Du möchtest Deine PAYBACK\n°Punkte-Gutscheine einlösen? [TextView] | PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView]'</li></ul> |
| Other | <ul><li>'Listen verwalten [TextView] | Offen [TextView] | Erledigt [TextView] | Meine Einkaufsliste (26) [TextView] | Chiquita Avocado [TextView] | 1x [TextView] | Deutsche Markenbutter [TextView] | 1x [TextView] | Gouda in Scheiben [TextView] | 1x [TextView] | Gourmeat Hähnchenschenkel [TextView] | 2x [TextView] | Gurken [TextView] | 1x [TextView] | Haribo Duftkerze [TextView] | 1x [TextView] | Himbeeren [TextView] | 1x [TextView] | Kneipp Geschenkset [TextView] | 1x [TextView] | Lorado Asia Nudeln 85g [TextView] | 1x [TextView] | Milchschnitte 5x28g [TextView] | 1x [TextView] | Milka Großtafel [TextView] | 1x [TextView] | Mini Pflaumen Tomaten [TextView] | 1x [TextView] | Artikel bearbeiten [TextView] | Sortierung ändern [TextView] | Artikel hinzufügen [TextView]'</li><li>'23 [TextView] | 1 [TextView] | Online-Shop [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView]'</li><li>'Listen verwalten [TextView] | Offen [TextView] | Erledigt [TextView] | Meine Einkaufsliste (25) [TextView] | NeoprenSortiment sort.1ST [TextView] | 1x [TextView] | Artikel bearbeiten [TextView] | Sortierung ändern [TextView] | Artikel hinzufügen [TextView] | NeoprenSortiment sort.1ST [TextView] | 1x [TextView] | Nutella [TextView] | 1x [TextView] | Nutella Nuss-Nugat-Creme [TextView] | 1x [TextView] | Peperoni Baguette [TextView] | 1x [TextView] | Toilettenpapier [TextView] | 1x [TextView] | Trauben hell [TextView] | 1x [TextView] | käse [TextView] | 1x [TextView] | Chiquita Avocado [TextView] | 1x [TextView] | Deutsche Markenbutter [TextView] | 1x [TextView] | Gouda in Scheiben [TextView] | 1x [TextView] | Gourmeat Hähnchenschenkel [TextView] | 2x [TextView] | Gurken [TextView] | 1x [TextView] | Haribo Duftkerze [TextView] | 1x [TextView] | Himbeeren [TextView] | 1x [TextView] | Kneipp Geschenkset [TextView] | 1x [TextView] | Lorado Asia Nudeln 85g [TextView] | 1x [TextView]'</li></ul> |
| Start | <ul><li>'23 [TextView] | 1 [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Neues für Dich [TextView] | Alle anzeigen [TextView] | Jetzt mitmachen! [TextView] | Sparen leicht gemacht! [TextView] | Deine Funktionen [TextView] | 0 [TextView] | ºPunkte [TextView] | Digitale Kassenbons [TextView] | Code einlösen [TextView] | Corvette-Verlosung [TextView] | Online-Shop Coupons [TextView] | Rezepte [TextView] | Alle anzeigen [TextView] | Cheeseburger Muffins [TextView] | Nudelsalat Klassiker [TextView] | Mein Konto [TextView] | Mein PAYBACK [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Cheeseburger Muffins [TextView] | Nudelsalat Klassiker [TextView]'</li><li>'22 [TextView] | 1 [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Prospekte [TextView] | Alle anzeigen [TextView] | Filialhandzettel [TextView] | ab Montag, 21.07.25 [TextView] | Bestellmagazin [TextView] | Einfach online bestellen [TextView] | Reisemagazin [TextView] | Jetzt Reise buchen [TextView] | Neues für Dich [TextView] | Alle anzeigen [TextView] | Jetzt mitmachen! [TextView] | Sparen leicht gemacht! [TextView] | Deine Funktionen [TextView] | 0 [TextView] | ºPunkte [TextView] | Digitale Kassenbons [TextView] | Code einlösen [TextView] | Corvette-Verlosung [TextView] | Online-Shop Coupons [TextView] | Mein Konto [TextView] | Mein PAYBACK [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView]'</li><li>'Gewinne 1 Jahr lang Corvette fahren [TextView] | Du hast noch kein Los im Gewinnspiel-Topf. Jetzt in einer teilnehmenden Filiale einkaufen und an der Kasse Deine Netto plus App scannen und Deine Gewinnchance erhöhen. [TextView] | Aktuelles Gewinnspiel [TextView] | 0 [TextView] | Gewinnlose [TextView] | im Gewinnspiel-Topf [TextView] | Gewinnspiel-Topf schließt in: [TextView] | 5T : 08Std : 24Min [TextView] | Letztes Gewinnspiel [TextView] | Bitte beachte, dass die aktuelle Ziehung noch aussteht. [TextView] | Ziehung läuft ... [TextView] | Bis Mittwoch wird der oder die Gewinner/Gewinnerin per E-Mail informiert. [TextView] | Zurück [TextView] | Gewinne 1 Jahr lang Corvette fahren [TextView] | Du hast noch kein Los im Gewinnspiel-Topf. Jetzt in einer teilnehmenden Filiale einkaufen und an der Kasse Deine Netto plus App scannen und Deine Gewinnchance erhöhen. [TextView] | Aktuelles Gewinnspiel [TextView] | 0 [TextView] | Gewinnlose [TextView] | im Gewinnspiel-Topf [TextView] | Gewinnspiel-Topf schließt in: [TextView] | 5T : 08Std : 24Min [TextView] | Letztes Gewinnspiel [TextView] | 23 [TextView] | 1 [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Prospekte [TextView] | Alle anzeigen [TextView] | Filialhandzettel [TextView] | ab Montag, 21.07.25 [TextView] | Bestellmagazin [TextView] | Einfach online bestellen [TextView] | Reisemagazin [TextView] | Jetzt Reise buchen [TextView] | Neues für Dich [TextView] | Alle anzeigen [TextView] | Jetzt mitmachen! [TextView] | Sparen leicht gemacht! 
[TextView] | Deine Funktionen [TextView] | 0 [TextView] | ºPunkte [TextView] | Digitale Kassenbons [TextView] | Code einlösen [TextView] | Corvette-Verlosung [TextView] | Online-Shop Coupons [TextView] | Mein Konto [TextView] | Mein PAYBACK [TextView] | Startseite [TextView] | Angebote [TextView] | Coupons [TextView] | Online-Shop [TextView] | Karte + Zahlen [TextView] | Frankfurt am Main [TextView] | Melibocusstr. 52 [TextView] | Ändern [TextView] | Prospekte [TextView] | Alle anzeigen [TextView] | Filialhandzettel [TextView] | ab Montag, 21.07.25 [TextView] | Bestellmagazin [TextView] | Einfach online bestellen [TextView] | Reisemagazin [TextView] | Jetzt Reise buchen [TextView] | Neues für Dich [TextView] | Alle anzeigen [TextView] | Jetzt mitmachen! [TextView] | Sparen leicht gemacht! [TextView] | Deine Funktionen [TextView] | 0 [TextView] | ºPunkte [TextView] | Digitale Kassenbons [TextView] | Code einlösen [TextView] | Corvette-Verlosung [TextView] | Online-Shop Coupons [TextView]'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("SzymonKozl/setfit-app-class-netto-balanced-v1")
# Run inference
preds = model("Melibocusstr. 52 [TextView] | 60528 Frankfurt am Main [TextView] | Ändern [TextView] | Karte + Zahlen [TextView] | Nur Karte [TextView] | Code gültig: 5:00 Min. [TextView] | Zahlung: [TextView] | PayPal [TextView] | Code: [TextView] | PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView] | Bitte Deine PIN eingeben [TextView] | Hast Du Deine PIN vergessen? [TextView] | Du möchtest Deine PAYBACK
°Punkte-Gutscheine einlösen? [TextView] | PAYBACK °Punkte [TextView] | Coupons aktiv [TextView] | 0 von 0 [TextView] | Digitaler Kassenbon [TextView] | inaktiv [TextView] | **** 2537 [TextView] | 58 von 66 [TextView] | **** 2537 [TextView] | 58 von 66 [TextView]")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:-----|
| Word count | 1 | 254.8 | 1055 |
| Label | Training Sample Count |
|:------------|:----------------------|
| Angebote | 8 |
| Coupons | 8 |
| Kundenkarte | 8 |
| Other | 8 |
| Start | 8 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
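For readers who want to reproduce a comparable run, these hyperparameters map onto the `setfit` training API roughly as follows. This is a sketch with a toy dataset, not the authors' original training script, and loading the `gte-multilingual-base` body may additionally require trusting its custom model code:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy few-shot dataset; the real training data consisted of app screen dumps per label.
train_dataset = Dataset.from_dict({
    "text": ["Angebote ...", "Coupons ...", "Karte + Zahlen ..."],
    "label": ["Angebote", "Coupons", "Kundenkarte"],
})

model = SetFitModel.from_pretrained("Alibaba-NLP/gte-multilingual-base")

args = TrainingArguments(
    batch_size=(4, 4),
    num_epochs=(1, 1),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset, metric="accuracy")
trainer.train()
```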
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0031 | 1 | 0.1572 | - |
| 0.1562 | 50 | 0.1778 | - |
| 0.3125 | 100 | 0.08 | - |
| 0.4688 | 150 | 0.0897 | - |
| 0.625 | 200 | 0.0693 | - |
| 0.7812 | 250 | 0.0519 | - |
| 0.9375 | 300 | 0.0373 | - |
### Framework Versions
- Python: 3.10.8
- SetFit: 1.1.2
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0
- PyTorch: 2.7.1+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Lyrical-gpt-GGUF
|
mradermacher
| 2025-08-06T11:07:09Z | 112 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:solidjoe/Lyrical-gpt",
"base_model:quantized:solidjoe/Lyrical-gpt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T11:05:48Z |
---
base_model: solidjoe/Lyrical-gpt
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/solidjoe/Lyrical-gpt
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lyrical-gpt-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
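As a minimal, hedged sketch (assuming a local llama.cpp build; the quant file name is taken from the table below, and paths are illustrative):
```bash
# Download one quant from this repo and run it with llama.cpp's CLI.
huggingface-cli download mradermacher/Lyrical-gpt-GGUF Lyrical-gpt.Q4_K_M.gguf --local-dir .
./llama-cli -m Lyrical-gpt.Q4_K_M.gguf -p "Write a short verse about rain." -n 128
```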
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lyrical-gpt-GGUF/resolve/main/Lyrical-gpt.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Thireus/GLM-4.5-THIREUS-IQ2_K-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T11:06:17Z | 13 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T23:24:45Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
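A hedged sketch of what such a self-quantization call could look like (the regex/type pairs and the imatrix file name are illustrative assumptions, not taken from a real recipe; use the linked example and your own `quant_assign.py` output for the exact invocation):
```bash
# Illustrative only: quantize locally from the BF16 shards using recipe-style rules.
./build/bin/llama-quantize \
  --imatrix imatrix.dat \
  --custom-q "blk\..*\.ffn_(gate|up)_exps\.weight=q5_0,blk\..*\.ffn_down_exps\.weight=q6_0" \
  GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
  GLM-4.5-custom.gguf Q8_0
```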
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
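For the GPG-signed files mentioned above, a small hedged sketch of the verification step (the detached-signature file name is an assumption; check the actual file names in the repository):
```bash
# Import the trusted key once, then verify the signed metadata before trusting the shards.
gpg --import trusted-keys.asc
gpg --verify tensors.map.sig tensors.map
```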
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
mlx-community/VisualQuality-R1-7B-bf16
|
mlx-community
| 2025-08-06T11:06:01Z | 13 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2_5_vl",
"IQA",
"Reasoning",
"VLM",
"Pytorch",
"R1",
"GRPO",
"RL2R",
"reinforcement-learning",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:mit",
"region:us"
] |
reinforcement-learning
| 2025-08-06T10:45:50Z |
---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: reinforcement-learning
tags:
- IQA
- Reasoning
- VLM
- Pytorch
- R1
- GRPO
- RL2R
- mlx
---
# mlx-community/VisualQuality-R1-7B-bf16
This model was converted to MLX format from [`TianheWu/VisualQuality-R1-7B`](https://huggingface.co/TianheWu/VisualQuality-R1-7B) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/TianheWu/VisualQuality-R1-7B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/VisualQuality-R1-7B-bf16 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
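Alternatively, a hedged sketch of the mlx-vlm Python API (function names follow recent mlx-vlm releases; exact signatures and return types may differ between versions):
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/VisualQuality-R1-7B-bf16"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.jpg"]  # local path or URL
prompt = "Describe the quality of this image."
formatted = apply_chat_template(processor, config, prompt, num_images=len(images))

# Generate a description for the image/prompt pair.
output = generate(model, processor, formatted, images, max_tokens=100, verbose=False)
print(output)
```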
|
mainak1606/phi3-medical-soap-summaryQQ1
|
mainak1606
| 2025-08-06T10:49:44Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"trl",
"sft",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T10:43:03Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MonsterMMORPG/Generative-AI
|
MonsterMMORPG
| 2025-08-06T10:48:44Z | 0 | 5 | null |
[
"region:us"
] | null | 2023-12-25T21:32:42Z |
# Constantly Updated Patreon Projects on : https://www.patreon.com/c/SECourses
## Patreon Exclusive Content
### March 25 - 2025
[Wan 2.1 Ultra Advanced Gradio APP for - Works as low as 4GB VRAM - 1-Click Installers for Windows, RunPod, Massed Compute - Batch Processing - T2V - I2V - V2V](https://www.patreon.com/posts/123105403)
* 1-Click to install and use the most powerful ever released Image to Video, Text to Video and Video to Video Wan 2.1 Open source model. Public Post to see APP features : https://www.patreon.com/posts/123114193
### March 25 - 2025
[ComfyUI Auto Installer with FaceID, IP-Adapter, InsightFace, Reactor, Triton, DeepSpeed, Flash Attention, Sage Attention Automatic Installers for Windows, RunPod, Massed Compute, Linux](https://www.patreon.com/posts/105023709)
* 1 Click auto installers for ComfyUI latest version for Windows, Massed Compute and RunPod. Installs latest version of ComfyUI into an isolated Python venv. Auto download best SDXL and SD 1.5 models and auto installs ComfyUI manager, FaceID, InsightFace, Triton, DeepSpeed, Flash Attention, Sage Attention and Ip Adapter of ComfyUI on Windows Python VENV (not WSL). Also includes automatic downloader.py file to download all of the IP Adapter, InsightFace and FaceID models for you. Moreover we have 1-Click installer for Reactor extension with its all dependencies and auto download of necessary models.
### March 20 - 2025
[MMAudio 1-Click Installers for Windows, RunPod and Massed Compute - Generate Audio for Any Video - Amazing for AI Generated Videos](https://www.patreon.com/posts/117990364)
* 1-Click installers to install MMAudio. This app is so lightweight and fast. Pretty much should work on every GPUs. I have also improved the official published Gradio app and added more features. MMAudio generates synchronized audio given video and/or text inputs. Our key innovation is multimodal joint training which allows training on a wide range of audio-visual and audio-text datasets. Moreover, a synchronization module aligns the generated audio with the video frames.
### March 20 - 2025
[MagicQuill 1-Click Installers for Windows, RunPod and Massed Compute - Amazing and Ultra Fast Inpaint Model](https://www.patreon.com/posts/117326651)
* 1-Click Installer files for MagicQuill app to install on Windows, RunPod and Massed Compute : https://github.com/magic-quill/MagicQuill
### March 20 - 2025
[VisoMaster Automatic Installer - The Most Advanced 0-Shot Face Swap / Deep Fake APP - State of the Art - Windows and Massed Compute](https://www.patreon.com/posts/121570322)
* 1-Click to install VisoMaster on Windows and also on Massed Compute (for GPU poor). VisoMaster is a powerful yet easy-to-use tool for face swapping (FaceSwap/DeepFake) and editing in images and videos. It utilizes AI to produce natural-looking results with minimal effort, making it ideal for both casual users and professionals.
### March 18 - 2025
[Invoke AI Latest Version Windows, RunPod and Massed Compute 1-Click Installers](https://www.patreon.com/posts/112912425)
* 1-Click to install latest version of InvokeAI on your Windows computer, on RunPod and on Massed Compute with a super detailed and easy Tutorial video and written scripts. This zip file also has instructions to use the Invoke AI on your Windows Computer browser while running on Massed Compute securely by using PowerShell commands.
### March 16 - 2025
[LivePortrait Upgraded Latest Version Human and Animal Version 1-Click Installers for Windows, RunPod, Massed Compute and a Free Kaggle Account Notebook - Blazing Fast Static Image or Video to Video to Talking and Moving Animation](https://www.patreon.com/posts/119254105)
* 1-Click installers for Latest and improved version of LivePortrait for Windows, RunPod, Massed Compute and a Free Kaggle Account notebook. It supports latest LivePortrait animal v1.1 version.
### March 15 - 2025
[Most Advanced 1-Click DeepFake / FaceSwap App Rope, Rope Live, Rope Alucard and Rope NEXT Installers for Windows and Massed Compute and Linux](https://www.patreon.com/posts/105123768)
* The easiest and most powerful 1-click DeepFake / FaceSwap open source app Rope, Rope Live, Rope Alucard and Rope NEXT installers for Windows, Massed Compute (Cloud), Linux and a lot of configurations and test results shared
### March 13 - 2025
[A Kaggle Notebook to Train Stable Diffusion 1.5 and XL (SDXL) on a Free Kaggle Account with 2x Dual T4 GPU for free by using Kohya GUI](https://www.patreon.com/posts/88397937)
* A Kaggle NoteBook to do Stable Diffusion 1.5 and XL (SDXL) training. Fully supports 2x T4 dual GPU to speed up training. Kohya GUI is used to do DreamBooth / Fine-Tuning and LoRA Trainings.
### March 13 - 2025
[Hugging Face Upload / Download Notebook - Supports Private Repositories and Multi Commit as Well](https://www.patreon.com/posts/104672510)
* If you are looking for convenient and fast way to save and download your files from Hugging Face, this notebook will do the job. 1-click easy
### March 13 - 2025
[Virtual Try-on (IDM-VTON) 1 Click Installers - Try Any Clothing Immediately On Anyone - Windows - RunPod - Massed Compute - Kaggle - Works even on Object Transfer](https://www.patreon.com/posts/122718239)
* 1 Click installers for IDM-VTON (one of the very best virtual try-on tools for clothing and anything else) for Windows, RunPod, Massed Compute and a free Kaggle account. Our app also has extra features compared to the official IDM-VTON. It automatically crops and pastes back images and supports quantization and CPU offloading. Public post for more info : https://www.patreon.com/posts/122721073
### March 12 - 2025
[AuraSR GigaGAN 4x Upscaler Gradio APP, Installers for Windows, RunPod, Massed Compute and free Kaggle Account with Seams Fix and Batch Processing](https://www.patreon.com/posts/121441873)
* 1-Click to install and use the newest AuraSR GigaGAN 4x upscaler with batch upscaling, locally on Windows and also on popular cloud services
### March 11 - 2025
[SwarmUI Easy Ultra Fast and Robust Unified Downloader for Stable Diffusion 3.5, FLUX, Mochi 1, SDXL and SD 1.5](https://www.patreon.com/posts/114517862)
* Download the very best Stable Diffusion 3.5 Large, FLUX, SDXL, Mochi 1 (SOTA text-to-video), FLUX Tools (Inpainting, Outpainting, Canny, Depth, Redux), latent upscaler models like ESRGAN, an improved Clip-L that works better, and SD 1.5 models into the correct SwarmUI folders with 1 click, ultra fast and robustly.
### March 5 - 2025
[Blazing Fast SD Forge Web UI Latest Version Windows, RunPod and Massed Compute Automatic Installers and Unified Model Downloaders for SD 1.5, SDXL, FLUX and More Newer Models](https://www.patreon.com/posts/118442039)
* 1-Click to install the latest SD Forge Web UI on Windows, RunPod and Massed Compute and download all of the amazing FLUX, SD 1.5, SDXL, SD3.5 and many more models. Our installers and downloader scripts are so well optimized that you can see up to 1000 MB per second download speeds if your internet speed is sufficient.
### March 5 - 2025
[1 Click Installer for Automatic1111 SD Web UI, SDXL, ControlNet, All ControlNet Models, TensorRT (RTX Accelerator) on Windows](https://www.patreon.com/posts/86307255)
* Automatic Windows installer script for SDXL and Automatic1111 Web UI. Downloads latest SDXL base with fixed VAE and best SD 1.5 and SDXL models. Moreover it automatically installs and lets you download newest NVIDIA RTX Accelerator - TensorRT which brings 70%+ Speed Up. Moreover, it will automatically install ControlNet and download all available ControlNet models for you. Furthermore, it will auto install After Detailer (ADetailer) and Reactor extensions and latest Torch and xFormers. All these installations are optional and you can install any of them.
### February 25 - 2025
[Gradio APP for Deterministic Text and Graphs Having Images Upscaling based on ImageMagick - Windows, RunPod, Massed Compute](https://www.patreon.com/posts/123071348)
* If you need to upscale images with 100% accurate text preservation this APP is what you need. A Gradio based multi-threaded batch upscaler APP that utilizes ImageMagick with amazing upscale presets. Public Post to see APP features : https://www.patreon.com/posts/123073046
### February 22 - 2025
[Free Kaggle Account Notebook for SwarmUI with FLUX, SD 1.5, SDXL & Stable Diffusion 3.5 Large, FLUX, Hunyuan and Dual T4 GPU support](https://www.patreon.com/posts/106650931)
* Use very advanced SwarmUI on a free Kaggle account for free with dual T4 GPU. Fully supports SD 1.5, SDXL, SD3, FLUX, FLUX Tools (Redux, Canny, Depth, Inpainting), Stable Diffusion 3.5 Large, Stable Cascade, Hunyuan, SkyReels, Mochi 1 and more
### February 7 - 2025
[BiRefNet HR (High Resolution) Gradio APP and 1-Click Installers for Windows, RunPod, Massed Compute and a Free Kaggle Account Notebook](https://www.patreon.com/posts/121679760)
* BiRefNet HR Version Automatic Installers for Windows, RunPod, Massed Compute and a free Kaggle notebook. The very best SOTA background remover with Gradio APP. It is updated to newest High Resolution model and it supports batch processing fully with new half precision lower VRAM feature.
### February 2 - 2025
[DeepFace Based Batch Face Similarity Sorting Gradio APP For Windows, RunPod and Massed Compute - 1-Click to Install - Uses TensorFlow GPU - Very Fast](https://www.patreon.com/posts/121335747)
* With the DeepFace & RetinaFace libraries you can sort AI images, or basically any images, based on their similarity to one or more given images (average taken). We have developed a batch processing Gradio APP for this task that installs its libraries into a Python 3.10 VENV and works perfectly on Windows. We also have 1-click installers for RunPod and Massed Compute. The APP is fully multi-threaded.
### February 1 - 2025
[1-Click Installers for Paints-UNDO Advanced Gradio APP, Windows, RunPod, Massed Compute and Kaggle Installers](https://www.patreon.com/posts/121228327)
* 1-Click Installers for Paints-UNDO from lllyasviel. 1-Click Install on Windows (Python 3.10 isolated VENV), RunPod, Massed Compute and a free Kaggle Notebook. Official repo here but our APP is improved version : https://github.com/lllyasviel/Paints-UNDO
### January 31 - 2025
[EasyAnimate 1-Click Install Windows, RunPod and Massed Compute, Newest SOTA Local Open Source Image-to-Video, Text-to-Video and More](https://www.patreon.com/posts/115888558)
* 1-Click Installers for EasyAnimate. It is literally Runway but Open Source and FREE, Text-to-Video, Image-to-Video (both beginning and ending frame), Video-to-Video, Works on 24 GB GPUs on Windows, supports 960px resolution, supports very long videos with Overlap. Install on Windows, RunPod and Massed Compute
### January 20 - 2025
[Ultimate Image Processing APP : Batch Cropping, Zooming In, Resizing, Duplicate Image Removing, Face Extraction, SAM 2 and Yolo Segmentation, Masking for Windows, RunPod, Massed Compute and Free Kaggle Account](https://www.patreon.com/posts/120352012)
* If you want to batch pre-process your training images, like auto zoom subject and resize perfectly into exact resolution, this is the script you need. 1-Click to install and use on Windows, RunPod and Massed Comput and even Kaggle. It supports YOLO V11 and SAM 2. Moreover it has even duplicate image removing as well. You can see all features in this public post : https://www.patreon.com/posts/120353641
### January 17 - 2025
[1-Click Installers for Most Powerful Vision Model CogVLM V2 and Batch Captioning Images, Install on Windows, RunPod and Massed Compute](https://www.patreon.com/posts/120193330)
* CogVLM 2 is an open source vision model that is on par with OpenAI's GPT-4. This post has 1-Click installers for the CogVLM 2 Gradio APP. It installs into a Python 3.10 VENV and fully supports Triton and 4-bit quantization. Works amazingly well. Has batch processing and many other features. For features, screenshots and more information check this public post : https://www.patreon.com/posts/120195496
### January 16 - 2025
[TRELLIS 1-Click Installers for Windows, RunPod, Massed Compute, New SOTA Image-to-3D Full Model - Mind Blowing](https://www.patreon.com/posts/117470976)
* 1-Click installers to install TRELLIS on Windows, RunPod and Massed Compute. https://github.com/microsoft/TRELLIS. This is the SOTA model for image to full 3D generation and it is blazing fast
### January 13 - 2025
[From NVIDIA Labs SANA Text-to-Image Model 1-Click Installers for Windows, RunPod, Massed Compute and Free Kaggle Account](https://www.patreon.com/posts/116474081)
* 1-Click installers for NVIDIA Labs SANA (Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer) model with an amazing Gradio APP developed by SECourses. Windows, RunPod, Massed Compute and Kaggle. Pretty lightweight and fast model to use. 1K (1024x1024) : 4 GB GPUs. 2K (2048x2048) : 6 GB GPUs. 4K (4096x4096) : 8 GB GPUs
### January 10 - 2025
[Ultra Advanced InstantID Gradio APP - Automatic Installers for Windows, RunPod and Massed Compute and Free Kaggle - Using the Very Best ControlNet Xinsir Models](https://www.patreon.com/posts/118469722)
* Ultra advanced InstantID Gradio APP : 0-shot Identity-Preserving Generation in Seconds Gradio APP with latest best Xinsir ControlNet models. We have so many extra features compared to official InstantID APP plus we have 1-click very easy install on Windows, RunPod, Massed Compute and a free Kaggle Account notebook that works perfect
### January 8 - 2025
[IC-Light (Most Advanced 1-Click Relight / Re-Light) 1-Click Installer for Windows, RunPod, Massed Compute & Kaggle Notebook](https://www.patreon.com/posts/119566071)
* 1-click auto installer scripts for IC-Light (Re-Light / Relight any image) for Windows, Massed Compute, RunPod, Linux and a free Kaggle Notebook
### January 4 - 2025
[Very Advanced Joy Caption APPs, Supports Batch Processing and Multi-GPU Captioning, Supports Joy Caption Pre Alpha, Alpha One, Alpha Two, 1-Click installers](https://www.patreon.com/posts/118827960)
* Ultra advanced Gradio app for JoyCaption Pre-Alpha, Alpha One and Alpha Two with batch image captioning in addition to Multi-GPU support. 1-Click to install on Windows, RunPod and Massed Compute
### January 1 - 2025
[1-Click Installers for One of the Best Face Swap / Easy DeepFake APPs Roop Unleashed for Windows, RunPod and Massed Compute](https://www.patreon.com/posts/119081500)
* 1-Click installers for one of the very best Deep Fake / Face Swap app Roop Unleashed for Windows, RunPod and Massed Compute. This app uses Gradio interface and supports live Web Cam replace as well for streams like Google Meeting.
### December 23 - 2024
[FaceFusion V3 (Most Advanced 1-Click DeepFake) 1-Click Installers for Windows, RunPod and Massed Compute - Use locally and cloud](https://www.patreon.com/posts/103765029)
* FaceFusion 3 Windows (local), RunPod and Massed Compute (Cloud) 1-Click installers. Install and run with easiness. The best DeepFake APP
### December 20 - 2024
[SUPIR 1 Click Windows, RunPod / Massed Compute / Linux Installer & Free Kaggle Notebook](https://www.patreon.com/posts/99176057)
* 1 Click Windows and RunPod / Massed Compute / Linux Installer and a free Kaggle Notebook For The New SOTA Image Upscaling and Enhancing Open Source SUPIR Model. Better than Magnify AI. SUPIR is the very best AI image upscaler at the moment.
### December 11 - 2024
[1-Click Automatic Installers for Hunyuan3D-1 Text to 3D and Image to 3D SOTA Model, Install on Windows, RunPod and Massed Compute](https://www.patreon.com/posts/115412205)
* 1-Click Python venv installers for Hunyuan3D-1.0. The installers support Windows (use locally - works on 24 GB GPUs super fast tested on RTX 3090), RunPod and Massed Compute (use on cloud). Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generationr : https://github.com/tencent/Hunyuan3D-1
### December 4 - 2024
[SwarmUI Master Tutorial - Use Stable Diffusion 3.5 Large and FLUX model with Amazing Performance and more](https://www.patreon.com/posts/106135985)
* In this public tutorial post I will share all the necessary stuff regarding how to use SD 1.5, SDXL, Stable Diffusion 3, Stable Diffusion 3.5 Large and FLUX (by Black Forest Labs new SOTA model) on your computer and also on the cloud (Massed Compute, RunPod and a free Kaggle account).
### November 20 - 2024
[Kohya FLUX Fine Tuning (Full Checkpoints) Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute](https://www.patreon.com/posts/112099700)
* Fully researched and optimized very best full FLUX fine-tuning configurations and workflows that work way better than FLUX LoRA training. Perfectly trainable on 6 GB, 8 GB, 10 GB, 12 GB, 16 GB, 24 GB and 48 GB GPUs. The 48 GB GPU config and the 6 GB GPU config yield exactly the same quality; only the speed differs.
### November 20 - 2024
[Kohya FLUX LoRA Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute](https://www.patreon.com/posts/110879657)
* Step-by-step Kohya FLUX LoRA training tutorial with perfect hyperparameters and configs for 8GB, 10GB, 12GB, 16GB, 24GB and 48GB GPUs. It works perfectly on all these GPUs, both on Windows and on Linux. It covers cloud services such as RunPod and Massed Compute as well. This tutorial is the result of more than 64 full FLUX LoRA trainings done with Kohya SS GUI.
### November 16 - 2024
[1-Click Windows, RunPod, Massed Compute and Kaggle Installers For SDXL Background Replacement for Product Images - Make Amazing Product Ads For Shopify](https://www.patreon.com/posts/89914747)
* 1-Click Install Shopify Product Background Replacer (Open Source) And Make Amazing Ads Pictures For Your Products On Your Computer With SDXL. We have 1-Click installers for Windows, RunPod, Massed Compute and a ready to run Kaggle Notebook with super model downloading improvement technique as well.
### November 3 - 2024
[OmniGen 1-Click Automatic Installers for Windows, RunPod and Massed Compute](https://www.patreon.com/posts/115233922)
* 1-Click Python venv installers for OmniGen. The installers support Windows (use locally), RunPod and Massed Compute (use on cloud). OmniGen is a unified image generation model that can generate a wide range of images from multi-modal prompts. It is designed to be simple, flexible, and easy to use
### November 1 - 2024
[Perfect Quality Example Model Training Images Dataset - Can Be Used On FLUX, Stable Diffusion 3.5, 3, SDXL, SD 1.5 and Such](https://www.patreon.com/posts/114972274)
* If you are looking for a properly prepared example model training dataset, to learn how to prepare and test one, this is the dataset! You can use this dataset to train your SD 1.5, SDXL, FLUX, Stable Diffusion 3 and Stable Diffusion 3.5 Large models and see how it performs. You can analyze this dataset and read the post to understand how to generate your perfect dataset, not only for training a person but also a style, an item, an object and such.
### October 31 - 2024
[FLUX De-Distilled and Anti-Bleeding Fine-Tuning / DreamBooth & LoRA Training Experiments - Also Testing CFG Impact for Stylized Images](https://www.patreon.com/posts/114969137)
* Research for fixing FLUX bleeding / mixing problem. Training multiple concepts / subjects. Searching and Testing FLUX De-Distilled models. Aim is preventing model turning entirely into you or training multiple styles, objects, person, items, products at once without getting mixed / bled.
### October 2 - 2024
[SOTA Image Captioning Scripts For Stable Diffusion: CogVLM, CogVLM V2, Kosmos-2, Qwen-VL, LLaVA, BLIP-2, Clip-Interrogator (115 Clip Vision Models + 5 Caption Models)](https://www.patreon.com/posts/sota-image-for-2-90744385)
* 1-Click installer collection for Windows and Cloud (RunPod & Massed Compute) covering the LLaVA, Kosmos-2, BLIP-2, CLIP Vision, CogVLM V1, CogVLM V2, Qwen-VL and CLIP Interrogator web APPs. All APPs support batch captioning as well. Amazingly fast and easy to use with Gradio.
### September 27 - 2024
[1-Click CogVideoX-5b Image to Video Installers For Windows, RunPod, Massed Compute - SOTA Open Source Model](https://www.patreon.com/posts/112848192)
* 1-Click to install and use CogVideoX-5B SOTA image to guided video generation model on your PC locally or on RunPod and Massed Compute
### September 20 - 2024
[Image Captioning Editor Gradio APP - Edit Your Captions Super Easy Including Batch Editing - For Windows, RunPod and Massed Compute](https://www.patreon.com/posts/108992085)
* Extremely advanced and lots of features having image captioning Gradio APP developed by SECourses. 1-Click to install and run on Windows and also Cloud (RunPod & Massed Compute). Use this application to edit and finalize your Stable Diffusion training dataset image captions.
### September 13 - 2024
[Training a FLUX LoRA with 256 Images Experiments - Full Workflow and Conclusions](https://www.patreon.com/posts/111891669)
* Training a perfect FLUX LoRA that learns even broken teeth details, full body details, full expressions with 256 images experiments
### August 27 - 2024
[FLUX Models 1-Click Auto Downloaders for SwarmUI for Windows, RunPod and Massed Compute](https://www.patreon.com/posts/109289967)
* Auto downloader for FP16, FP8 and quantized FLUX models for Windows, RunPod, Massed Compute with T5 Text Encoder and FLUX VAE
### August 27 - 2024
[1-Click to install on Windows, RunPod and Massed Compute Kohya FLUX LoRA and Fine Tuning Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute](https://www.patreon.com/posts/110293257)
* Article sharing full research and development, configs, workflows and grids for FLUX LoRA training. The article contains configs for 8GB, 10GB, 12GB, 16GB, 24GB and 48GB GPUs. The configs even include a 4x GPU setup as well
### August 23 - 2024
[SOTA Subject Cropper and Face Focused Image Resizer Scripts Do Better Training](https://www.patreon.com/posts/sota-subject-and-88391247)
* State Of The Art (SOTA) Subject Cropper (Zoom Subject Without Quality Loss) and SOTA Image Downscaler To Get Perfect Desired Resolution. These scripts will significantly improve your training quality.
### August 19 - 2024
[ResShift 1-Click Windows, RunPod, Massed Compute, Kaggle Installers with Amazing Gradio APP and Batch Image Processing](https://www.patreon.com/posts/110331752)
* 1-Click installer scripts for ResShift for Windows, RunPod, Massed Compute and Kaggle and a very advanced Gradio app with batch processing. ResShift is Efficient Diffusion Model for Image Super-resolution by Residual Shifting (NeurIPS 2023, Spotlight)
### August 17 - 2024
[The Very Best Workflow For SDXL DreamBooth / Full Fine Tuning - Results Of 100+ Full Trainings](https://www.patreon.com/posts/very-best-for-of-89213064)
* Updated the very best hyper training parameters / configuration and training workflow for Kohya SS GUI for Stable Diffusion XL (SDXL)
### August 16 - 2024
[OneTrainer Stable Diffusion XL (SDXL) Fine Tuning Best Presets](https://www.patreon.com/posts/96028218)
* Nerogar OneTrainer very best Stable Diffusion XL (SDXL) full fine tuning presets. 10.3 GB GPU is very sufficient and fast
### August 14 - 2024
[Image Folders Merger For Easy Comparison - Merge Images Side by Side - Useful to Compare Upscaling and Such Effect](https://www.patreon.com/posts/110108419)
* A Python script that resizes the lower resolution image from folders A and B to match the higher resolution one (without any upscaling algorithm) and merges them side by side
### July 19 - 2024
[Auto Windows Installer For Würstchen: Fast Diffusion for Image Generation](https://www.patreon.com/posts/auto-windows-for-89265135)
* Install latest Generative AI model Würstchen V2 to your computer with 1 click. Fixed file instead of broken Gradio demo hosted on Hugging Face
### June 24 - 2024
[Fooocus Stable Diffusion Web UI Kaggle NoteBook](https://www.patreon.com/posts/fooocus-stable-94269866)
* Fooocus Stable Diffusion Web UI Free Kaggle Account Notebook. Use SDXL on Kaggle for free, like Midjourney, without even needing a computer. This is the way to get an almost-Midjourney experience for free.
### June 21 - 2024
[Tencent AI Lab - V-Express Image to Animation Gradio Web APP and 1-Click Installers for Windows, Massed Compute, RunPod and Kaggle](https://www.patreon.com/posts/105251204)
* 1-Click to turn your static image into a fully animated talking video, driven by an input audio or video file, via Tencent AI Lab's V-Express - an open source alternative to D-ID and the like
### June 14 - 2024
[All Amazing Styles Of Fooocus For Automatic1111 SD Web UI and StableSwarmUI also necessary Scripts to generate them](https://www.patreon.com/posts/95143823)
* 275 amazing Fooocus SDXL styles in format of Automatic1111 SD Web UI and also as a Preset for StableSwarmUI with thumbnail preview images. Moreover, full python scripts to generate and update these styles and presets files
### June 13 - 2024
[Find And Move Duplicate or Very Similar Images By Using imagehash - Batch Processing Super Fast](https://www.patreon.com/posts/find-and-move-or-95143007)
* If you want to find duplicate or near duplicate images very fast, this script is what you are looking for. It analyzes the content of images so works amazingly
### June 11 - 2024
[1-Click Installers for CodeFormer: Robust Face Restoration and Enhancement Network, Windows, RunPod, Massed Compute, Linux, Kaggle](https://www.patreon.com/posts/104691847)
* 1-Click auto installers for the standalone CodeFormer Gradio APP with advanced features. The installers cover Windows, Massed Compute, Linux, Kaggle and RunPod. You can use it on a free Kaggle account as well with our Kaggle notebook. This app also has a batch folder processing feature and works many times better than Automatic1111 SD Web UI
### June 7 - 2024
[Massed Compute Automatic1111 and Forge Web UI Installers for ADetailer, ControlNet, TensorRT, Reactor, FaceFusion](https://www.patreon.com/posts/105735932)
* 1-Click installers for the latest versions of Automatic1111 Web UI, SD Forge Web UI, ControlNet, TensorRT, Reactor, FaceFusion and ADetailer. Moreover, the virtual machine comes by default with OneTrainer, Kohya and Pinokio AI APPs installed, and you can also update them with 1-Click.
### June 6 - 2024
[1 Click Installer for Automatic1111 SD Web UI, SDXL, ControlNet, All ControlNet Models, TensorRT (RTX Accelerator) For RunPod / Any Linux System](https://www.patreon.com/posts/86438018)
* Automatic RunPod (any Linux System) installer script for SDXL and Automatic1111 Web UI. Downloads latest SDXL base with fixed VAE and best SD 1.5 and SDXL models. Moreover it automatically installs and lets you download newest NVIDIA RTX Accelerator - TensorRT which brings 70%+ Speed Up. Moreover, it will automatically install ControlNet and download all available ControlNet models for you. Furthermore, it will auto install After Detailer (ADetailer) and Reactor extensions and latest Torch and xFormers. All these installations are optional and you can install any of them.
### May 27 - 2024
[IP-Adapter-FaceID-PlusV2 - 0 Shot Face Transfer - Auto Installer & Gradio App](https://www.patreon.com/posts/ip-adapter-0-app-95759342)
* 1 Click Auto Install IP-Adapter-FaceID-PlusV2. Use it with an advanced standalone Gradio app. 0 Shot face transfer and generate images.
### May 25 - 2024
[Run Automatic1111 SD Web UI On A Free Kaggle NoteBook Like In Your PC - Supports SDXL & ControlNet](https://www.patreon.com/posts/run-on-free-like-88714330)
* A Free Kaggle account notebook to use Automatic1111 for free. Supports SDXL, ControlNet, LoRA, trained LoRAs & automatic extension install. Works like you have a very strong computer. Dual 15 GB GPUs, 29 GB RAM provided for free by Kaggle. Auto downloads all of the ControlNet models for both SD 1.5 and SDXL models including even IP Adapter Face ID Plus and InstantID
### May 20 - 2024
[Massed Compute Installers - Upgrade Automatic1111 - Coupon Code - ControlNet - ADetailer - Facefusion - Reactor & More](https://www.patreon.com/posts/101386817)
* Massed Compute Scripts & Coupon Code: A6000 GPU for 31 Cents Per Hour, Automatic1111 SD Web UI, Kohya, OneTrainer, After Detailer (ADetailer), Reactor, Facefusion, Forge & More
### May 16 - 2024
[Stable Cascade 1 Click Installer & Advanced Gradio APP For Windows, Massed Compute, RunPod, Linux & Kaggle](https://www.patreon.com/posts/stable-cascade-1-98410661)
* 1 Click To Install Stable Cascade Model & Use On Your PC or On RunPod or On Massed Compute or Kaggle With Amazing Optimizations (Works on 5GB GPU) & Advanced GUI
### April 28 - 2024
[Fooocus SD Web UI RunPod & Massed Compute Auto Installer - 1 Click - Latest Version](https://www.patreon.com/posts/fooocus-sd-web-1-92759045)
* Automatic installer for Stable Diffusion Fooocus Web UI on RunPod and also Massed Compute. 1 Click. Use Fooocus on RunPod & Massed Compute with all models and features. Latest version. Follow the instructions on the Patreon post.
### April 18 - 2024
[For RunPod - Automatic Kohya SS LoRA Installer](https://www.patreon.com/posts/for-runpod-kohya-84898806)
* This script will automatically install Kohya SS on RunPod. Additionally, I have added an after-Pod-restart script which will fix the installation.
### March 22 - 2024
[The Very Best OneTrainer Workflow & Config For SD 1.5 Based Models DreamBooth / Full Fine Tuning](https://www.patreon.com/posts/very-best-config-97381002)
* Download the very best training configuration and learn the workflow for the OneTrainer GUI Stable Diffusion trainer & obtain amazing quality. The workflow was discovered after 70 empirical model trainings.
### March 20 - 2024
[Bit-By-Bit Disk & File Verification Software In C#, Fully Multi-Threaded, With Full Source Code - Verify Disk Clone](https://www.patreon.com/posts/76398813)
* This application, developed in C#, is extremely efficient at verifying every bit of cloned disks. It can also be used for file migration/backup verification. Full source code is available along with a pre-compiled exe file. It is fully multi-threaded.
### February 15 - 2024
[1 Click Auto Windows Installer For Rerender A Video - 1 Click Video To Anime](https://www.patreon.com/posts/89457537)
* Rerender is an amazing new Paper that allows you to turn videos into Anime with 1 click. Auto install scripts and instructions provided here
### February 5 - 2024
[1 Click Auto RunPod Installer For Rerender A Video - 1 Click Video To Anime](https://www.patreon.com/posts/1-click-auto-for-91039997)
* Rerender is an amazing new AI that allows you to turn videos into Anime with 1 click. RunPod auto install scripts and instructions are here.
### January 29 - 2024
[The Very Best Kohya GUI Workflow & Config For SD 1.5 Based Models DreamBooth / Full Fine Tuning](https://www.patreon.com/posts/very-best-kohya-97379147)
* Download the very best training configuration and learn the workflow for the Kohya SS GUI Stable Diffusion trainer & obtain amazing quality. The workflow was discovered after 70 empirical model trainings.
### January 23 - 2024
[Download 160 Very Best Stable Diffusion 1.5 Based (SD 1.5) Models With 1 Click](https://www.patreon.com/posts/96666744)
* 1 click Download the very best 160+ Stable Diffusion 1.5 models (SD 1.5) from CivitAI and Hugging Face into your PC or RunPod or Cloud.
### January 16 - 2024
[PixArt-alpha (PixArt-α) Automatic Installer For Both Windows And RunPod With Additional Web UI Features](https://www.patreon.com/posts/pixart-alpha-for-93614549)
* Auto installer scripts with an Advanced Web Gradio APP to install and use PIXART-α (PixArt-alpha - SDXL Rival) for both Windows and RunPod.
### January 14 - 2024
[Tortoise TTS Fast (tortoise-tts-fast) Windows Auto Installer BAT Script](https://www.patreon.com/posts/tortoise-tts-tts-90496485)
* 1 Click installer for tortoise-tts-fast on Windows. It will make its own VENV so it will not affect any other AI apps such as Stable Diffusion.
### January 1 - 2024
[Magic Animate Automatic Installer and Video to DensePose Auto Converter For Windows And RunPod](https://www.patreon.com/posts/94098751)
* Automatically install magic-animate on both Windows and RunPod. Also automatically generate DensePose from raw videos via best detectron2. Includes a standalone CodeFormer Gradio Web APP too for improving faces in videos fully automatically.
### December 23 - 2023
[Batch Image Metadata Generator - Extremely Useful For Automatic1111 SD Web UI](https://www.patreon.com/posts/95176238)
* If you want to batch generate Metadata of images with just one click, this is the script you are looking for. Extremely useful for SD Web UI
### December 23 - 2023
[All Amazing Styles Of Fooocus As Automatic1111 SD Web UI Styles File And Styles File Generator](https://www.patreon.com/posts/all-amazing-of-95143823)
* 275 Amazing Fooocus Styles in a single Styles.csv file compatible with Automatic1111 and Styles.csv generator for Fooocus styles folder.
### December 4 - 2023
[Auto Installer For AudioCraft Plus - MusicGen - AudioGen - An All-in-One AudioCraft WebUI](https://www.patreon.com/posts/ai-music-auto-84334460)
* Auto Installer Windows Bat Files For AudioCraft Plus - MusicGen - AudioGen - An All-in-One AudioCraft WebUI - Facebook Research / Audiocraft
### November 27 - 2023
[Massive 4K Resolution Woman & Man Class Ground Truth Stable Diffusion Regularization Images Dataset](https://www.patreon.com/posts/massive-4k-woman-87700469)
* 4K+ resolution 5200 images for each gender Hand Picked Ground Truth Real Man & Woman Regularization Images For Stable Diffusion & SDXL Training - 512px 768px 1024px 1280px 1536px and more
### November 25 - 2023
[SOTA (The Very Best) Image Captioning Models Script For Stable Diffusion And More](https://www.patreon.com/posts/sota-very-best-90744385)
* 1 Click install and use SOTA image captioning models on your computer. Supports 8 bit loading as well. 90+ CLIP Vision and 5+ Caption models. Supports laion/CLIP-ViT-bigG-14-laion2B-39B-b160k too. Supports total 115 Clip and 5 Caption model combination.
### November 20 - 2023
[Image Validator Script For Training - Moves Corrupted Images](https://www.patreon.com/posts/image-validator-85618765)
* The script attached below will test each of your images and move the ones that are corrupted (which break training) into another folder. Another script will scan and log but not move them.
### November 17 - 2023
[Automatic ControlNet Installer And Downloader For Windows BAT File](https://www.patreon.com/posts/84875387)
* Scripts will clone ControlNet repo and download all of the ControlNet models with SDXL into the correct folder automatically for Windows
### November 12 - 2023
[Gender Classifier - Low Colors & Multiple Face Remover - Stable Diffusion Training Images Preprocessor](https://www.patreon.com/posts/92607385)
* Gender Classifier - Low Colors & Multiple Face Remover - Stable Diffusion Training Images Preprocessor. Utilizes SOTA models and techniques. Supports GPU Retina Face too.
### November 9 - 2023
[Automatic ControlNet Installer / Updater - Model Downloader For RunPod](https://www.patreon.com/posts/84896373)
* This script will update ControlNet extension to its latest version and also automatically download all model files of ControlNet
### November 6 - 2023
[Auto Installer Bat Files For Automatic1111 & DreamBooth Extension On Windows](https://www.patreon.com/posts/auto-installer-84773926)
* Included BAT script files will clone and install fully automatically Automatic1111 SD Web UI and DreamBooth extension for you on Windows.
### October 28 - 2023
[RunPod Auto DreamBooth Extension Of Automatic1111 Web UI & Latest Libraries Installer Script](https://www.patreon.com/posts/runpod-auto-84716845)
* This script will install working version of DreamBooth extension of Automatic1111 Web UI fully automatically for you on RunPod.
### October 24 - 2023
[Automatic1111 Web UI Google Colab NoteBook With All ControlNet Models And More](https://www.patreon.com/posts/automatic1111-ui-89288738)
* Automatic1111 Web UI Google Colab Notebook With All ControlNet Models, SDXL Model, Best SD 1.5 Model, LoRA Download Example, Upscaler, SDXL LoRAs, SDXL ControlNet All Models & More
### October 5 - 2023
[Amazing Prompt List For DreamBooth or LoRA Trained Stable Diffusion XL (SDXL) & SD 1.5 Based Models](https://www.patreon.com/posts/amazing-prompt-1-90346033)
* Specially crafted very best Stable Diffusion XL (SDXL) + SD 1.5 based models prompt list for DreamBooth and LoRA trained models.
### September 14 - 2023
[Google Colab Notebook For Würstchen: Fast Diffusion for Image Generation](https://www.patreon.com/posts/google-colab-for-89280042)
* Würstchen V2 model on a free Google Colab Notebook with instructions. Super quality Generative AI like Stable Diffusion XL (SDXL) but faster
### September 12 - 2023
[How To Start Multiple Automatic1111 Web UI And Kohya Training On A Single Pod](https://www.patreon.com/posts/how-to-start-web-89150521)
* Download webui-user.sh and relauncher.py files and follow instructions to start multiple Automatic1111 on different GPUs on a single RunPod
### August 13 - 2023
[Convert Very Long X/Y/Z Plot Output Images Into Square Grids](https://www.patreon.com/posts/convert-very-x-y-87608128)
* A script to convert very long X/Y/Z Plot image into a chunked Square Grid Image. Examples are attached.
### August 8 - 2023
[1 Click RunPodCTL Installer .bat File - Script](https://www.patreon.com/posts/1-click-bat-file-87505171)
* 1 Click installer for runpodctl. runpodctl is super fast to upload and download files between pod to pc, pc to pod and pod to pod.
### August 1 - 2023
[How To Get Amazing Prompts With ChatGPT For Stable Diffusion](https://www.patreon.com/posts/how-to-get-with-87038686)
* How to Utilize free ChatGPT to write unlimited number of different prompts for Stable Diffusion models. 540 prompts attached
### July 29 - 2023
[SDXL Kohya LoRA Training With 12 GB VRAM Having GPUs - Tested On RTX 3060](https://www.patreon.com/posts/sdxl-kohya-lora-86817035)
* How to do SDXL Kohya LoRA training with GPUs that have 12 GB VRAM. VRAM usage for ranks 8, 16, 32, 64 and 96 is tested and shown. Config provided
### July 27 - 2023
[How To Fix Artifacts In The SDXL 1.0 VAE - Hidden Watermark System](https://www.patreon.com/posts/86736816)
* How to get rid of the embedded watermarking system in the SDXL 1.0 VAE. We will use a new VAE. How to use the proper VAE with SDXL for best quality.
### July 12 - 2023
[1k Resolution Class Images & Direct SDXL Download Links](https://www.patreon.com/posts/1k-resolution-85976249)
* 1024x1024 Pixels Class Images (From Real Pictures) For Amazing Realism For SDXL and Direct SDXL 0.9 and 1.0 Download Links (Official Source)
### July 10 - 2023
[Auto SDXL RunPod Installer Script - 1 Click ](https://www.patreon.com/posts/auto-sdxl-runpod-85845581)
* 1 Click SDXL Installer Script for RunPod. Works amazingly well. Use high VRAM GPUs for amazing speed. You don't need a token; I have set it for you.
### July 6 - 2023
[Auto Installer Script (.bat) Files For Stable Diffusion XL (SDXL) On Your PC](https://www.patreon.com/posts/auto-installer-85678961)
* Attached script files will automatically download and install SD-XL 0.9 into your computer and let you use SDXL locally for free as you wish
### July 4 - 2023
[Best Settings For The END of Photography - Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training](https://www.patreon.com/posts/best-settings-of-85192985)
* Screenshots of best settings for : The END of Photography - Use AI to Make Your Own Studio Photos, FREE Via DreamBooth Training
### June 21 - 2023
[How to fix Roop (insightface error) - cannot open include file: 'stdio.h': No such file or directory](https://www.patreon.com/posts/how-to-fix-roop-84932008)
* This post will show you how to fix insightface wheel compiling error when installing Roop on Windows. 'stdio.h': No such file or directory
### June 20 - 2023
[Auto Installer Bat File For Latest cuDNN dll files & How To Manually Install & Update](https://www.patreon.com/posts/auto-installer-84830198)
* Attached bat file will automatically download the 8.9.2.26 cuDNN dll files and replace the ones that come with the default Torch installation
### June 16 - 2023
[Core i7 10700F vs Core i9 13900K](https://www.patreon.com/posts/core-i7-10700f-84640971)
* Core i7 10700F vs Core i9 13900K results are shown in the image. Alternatively you can watch the youtube video to see them.
### June 9 - 2023
[2 Pre-Processing Scripts And 3 Datasets Of Processed Class Images For Popular Models](https://www.patreon.com/posts/84292083)
* Video Tutorial for this post : https://youtu.be/olX1mySE8HA. Batch preprocess images. Removes multiple-face, black & white, NSFW. Free datasets
### June 6 - 2023
[Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Favorite Movie Star! PC & Google Colab - roop](https://www.patreon.com/posts/mind-blowing-pc-84169579)
* Full video of: Mind-Blowing Deepfake Tutorial: Turn Anyone into Your Favorite Movie Star! PC & Google Colab - roop
### June 4 - 2023
[4K 2700 Real Class Images + Auto Cropping Script](https://www.patreon.com/posts/4k-2700-real-84053021)
* 4K resolution 2700 class / regularization raw images. Subject auto-cropper script. Prepared 512, 640, 768, 960, 1024px versions included. Can be used for fine-tuning
### May 28 - 2023
[How To Generate Very Long Text To Speech For Free On Cloud, e.g. Audiobook](https://www.patreon.com/posts/how-to-generate-83649203)
* Generate a very long text to speech with a single click on cloud for free. Example Audiobook : https://www.youtube.com/watch?v=5dSiuBjVcdk
### May 8 - 2023
[Voice Clone Tutorial Scripts](https://www.patreon.com/posts/voice-clone-82712205)
* As shown in the tutorial video, the scripts I have developed make voice cloning and text-to-speech synthesis much easier and more efficient.
### April 30 - 2023
[Enhanced DeepFloyd-IF Kaggle Notebook File](https://www.patreon.com/posts/enhanced-if-file-82253574)
* Enhanced DeepFloyd-IF Kaggle Notebook File as shown in the tutorial video.
### April 26 - 2023
[Realistic Vision V2 - 2071 classification / regularization images](https://www.patreon.com/posts/realistic-vision-82085317)
* Realistic Vision V2 - 2071 classification / regularization images
### April 26 - 2023
[Kohya SS LoRA Amazing Studio Quality Photoshoot Tutorial PDF](https://www.patreon.com/posts/kohya-ss-lora-82085260)
* The attached PDF file will be updated once the tutorial is finished and published.
### April 18 - 2023
[Kandinsky 2.1 For FREE Google Colab Account - Save in Drive, Batch Processing, Dynamic Prompting](https://www.patreon.com/posts/82085260)
* Kohya SS Tutorial as PDF file
### April 11 - 2023
[Summary And Conclusions of RTX 3090 vs RTX 3060 Ultimate Showdown for Stable Diffusion, ML, AI & Video Rendering Performance](https://www.patreon.com/posts/summary-and-of-81374648)
* You can download the summary, discoveries and the conclusions PDF file of the video : https://youtu.be/lgP1LNnaUaQ RTX 3090 vs RTX 3060 Ulti
### April 6 - 2023
[Kandinsky 2 Tutorial And Script](https://www.patreon.com/posts/kandinsky-2-and-81107231)
* The tutorial link is here : https://youtu.be/dYt9xJ7dnpU My modified and improved notebook file is attached. I may update it from time to time. This
### April 6 - 2023
[Custom Style Teached New Model](https://www.patreon.com/posts/custom-style-new-81107154)
* This is a custom model that I have trained on a certain style, as you see in the picture. You can use it as you wish.
### April 2 - 2023
[How To Quickly Upload Your RunPod Files To Google Drive](https://www.patreon.com/posts/how-to-quickly-80924234)
* By using the following Google Colab Notebook link you can very quickly upload your files (e.g. models or folders) to your Google Drive.
### March 27 - 2023
[10598 Aesthetic and 6080 Photo Of Man classification images](https://www.patreon.com/posts/10598-aesthetic-80588052)
* You can download 10598 aesthetic and 6080 photo of man classification images from the links below. You can use these images as regularization / clas
### March 22 - 2023
[Midjourney Level Style Trained Model](https://www.patreon.com/posts/midjourney-level-80356527)
* This is the video tutorial : https://youtu.be/m-UVVY_syP0 . Safetensors model file below. This model does not include myself - only the style
### March 19 - 2023
[Style Teaching & Aesthetic Dataset](https://www.patreon.com/posts/style-teaching-80233878)
* 2858 Style training images dataset prepared by me with the following words and certain prompt usage : https://drive.google.com/file/d/1A
### January 28 - 2023
[How To Achieve Synchronization In C# While Doing Async Await Multithreaded Programming - .NET Core](https://www.patreon.com/posts/how-to-achieve-c-77858916)
* Thank you so much for supporting us. Source code available in attachments.
|
oyvindbs/setfit-minister-party-politician-nb-sbert-base
|
oyvindbs
| 2025-08-06T10:45:18Z | 1 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"Norway",
"Cabinet Ministers",
"no",
"nb",
"arxiv:2209.11055",
"base_model:NbAiLab/nb-sbert-base",
"base_model:finetune:NbAiLab/nb-sbert-base",
"region:us"
] |
text-classification
| 2025-06-30T08:50:56Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
- Norway
- Cabinet Ministers
widget: []
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: NbAiLab/nb-sbert-base
language:
- 'no'
- nb
---
# Role: Party Politician
This model has been trained on Facebook posts by Norwegian cabinet ministers of the Solberg governments (2013-2021). It was used in Karlsen, Kolltveit and Solheim (2025).
The posts were hand-coded, specifying the different roles and purposes of the posts.
Below, we recreate Table 1 from the paper, showing the five roles and four purposes. The model included here identifies posts where the cabinet ministers take the role of **Party Politician**.
The setfit models that identify the other roles and purposes are available [here](https://huggingface.co/collections/oyvindbs/balancing-acts-the-communicative-roles-of-cabinet-ministers-68624b72c250c3cc1fd3ea14).
In the paper, we use one model for each purpose and each role. Each post can accordingly be ascribed to more than one purpose or role.
| | Communicative purposes | | | |
|------------------------------|-------------------------------|----------------------|-------------------|-----------------|
| **Communicative roles** | Informing | Communication | Mobilizing | Branding |
| Ministry head | | | | |
| Cabinet member | | | | |
| *Party politician* | | | | |
| Individual politician | | | | |
| Private person | | | | |
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification of Norwegian social media posts. It uses [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
It has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
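As a rough illustration of that two-step procedure, the sketch below trains a SetFit model from scratch on a tiny hand-labelled sample. The example texts, column names, and hyperparameters are placeholders, not the paper's actual data or settings:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data: 1 = "party politician" post, 0 = anything else
train_ds = Dataset.from_dict({
    "text": [
        "Stem på partiet vårt i høstens valg!",
        "I dag åpnet vi en ny motorvei i Trøndelag.",
    ],
    "label": [1, 0],
})

# Body: the nb-sbert-base encoder; head: LogisticRegression (the SetFit default)
model = SetFitModel.from_pretrained("NbAiLab/nb-sbert-base")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fitting the head
model.save_pretrained("setfit-party-politician")
```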
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 75 tokens
- **Number of Classes:** 1
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
**Language:**
* Norwegian (Bokmål)
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("oyvindbs/setfit_minister_nb-sbert-base_Ministry-Head")
# Run inference
preds = model("I loved the spiderman movie!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.4
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.50.1
- PyTorch: 2.5.1+cu118
- Datasets: 2.19.0
- Tokenizers: 0.21.0
## Citation
```bibtex
@article{KarlsenKolltveitSolheim,
author = {Karlsen, Rune and Kolltveit, Kristoffer and Solheim, Øyvind Bugge},
title = {Balancing Acts: The communicative roles of cabinet ministers on social media},
publisher = {Media and Communication},
year = {2025}
}
```
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Leonardo6/clip-imagenet-finetuned
|
Leonardo6
| 2025-08-06T10:05:04Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"base_model:openai/clip-vit-large-patch14-336",
"base_model:finetune:openai/clip-vit-large-patch14-336",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2025-08-06T08:22:04Z |
---
library_name: transformers
base_model: openai/clip-vit-large-patch14-336
tags:
- generated_from_trainer
model-index:
- name: clip-imagenet-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-imagenet-finetuned
This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset.
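Since no usage instructions are provided, here is a minimal zero-shot classification sketch; the image path and candidate labels are placeholders to replace with your own:
```python
from transformers import pipeline

# Hypothetical usage; verify the checkpoint behaves as expected for your labels
clf = pipeline("zero-shot-image-classification", model="Leonardo6/clip-imagenet-finetuned")
preds = clf(
    "path/to/image.jpg",  # a local file path or an image URL
    candidate_labels=["a photo of a cat", "a photo of a dog", "a photo of a car"],
)
print(preds[0])  # highest-scoring label with its probability
```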
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1024
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4096
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.53.3
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
AXERA-TECH/Qwen2.5-7B-Instruct
|
AXERA-TECH
| 2025-08-06T10:03:16Z | 15 | 0 |
transformers
|
[
"transformers",
"Context",
"Qwen2.5-7B-Instruct-GPTQ-INT8",
"Qwen2.5-7B-Instruct-GPTQ-INT4",
"text-generation",
"zh",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T14:50:14Z |
---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct-GPTQ-INT8
- Qwen/Qwen2.5-7B-Instruct-GPTQ-INT4
pipeline_tag: text-generation
library_name: transformers
tags:
- Context
- Qwen2.5-7B-Instruct-GPTQ-INT8
- Qwen2.5-7B-Instruct-GPTQ-INT4
---
# Qwen2.5-7B-Instruct
This version of Qwen2.5-7B-Instruct has been converted to run on the Axera NPU using **w8a16** and **w4a16** quantization.
This model has been optimized with the following LoRA:
Compatible with Pulsar2 version: 4.1
## Feature
- Support for longer contexts, in this sample it's 2k
- Support context dialogue
- System prompt kvcache is supported
## Convert tools links:
For those who are interested in model conversion, you can try to export axmodel through:
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)
[AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)
### Convert script
The following shows how to convert Qwen2.5-7B-Instruct-GPTQ-Int4:
```
pulsar2 llm_build --input_path Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4 \
--output_path Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4-ctx-ax650 \
--hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
--last_kv_cache_len 128 \
--last_kv_cache_len 256 \
--last_kv_cache_len 384 \
--last_kv_cache_len 512 \
--last_kv_cache_len 640 \
--last_kv_cache_len 768 \
--last_kv_cache_len 896 \
--last_kv_cache_len 1024 \
--chip AX650 -c 1 --parallel 8
```
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M4N-HAT](https://wiki.sipeed.com/hardware/zh/maixIV/m4nhat/intro.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
|Chips|w8a16|w4a16| DDR(w8) | Flash(w8) | DDR(w4) | Flash(w4) |
|--|--|--|--|--|--|--|
|AX650| 2.8 tokens/sec| 5.0 tokens/sec | | | 5.2GB | 5.7GB |
## How to use
Download all files from this repository to the device
```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ tree -L 1
.
├── config.json
├── main_api
├── main_api_ax650
├── main_api_axcl_aarch64
├── main_api_axcl_x86
├── main_ax650
├── main_axcl_aarch64
├── main_axcl_x86
├── post_config.json
├── qwen2.5-7b-ctx-int4-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer_uid.py
├── README.md
├── run_qwen2.5_7b_ctx_ax650.sh
├── run_qwen2.5_7b_ctx_int4_ax650.sh
├── run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
└── run_qwen2.5_7b_ctx_int4_axcl_x86.sh
3 directories, 15 files
```
#### Start the Tokenizer service
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ python qwen2.5_tokenizer_uid.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345
```
#### System prompt cache
- The system prompt can be preset via the `--system_prompt` option in the run script
- The system prompt can be cached as a KV cache in a folder specified via `--kvcache_path`, for quick loading at the next run time
- This folder needs to be created manually before running, for example `mkdir kvcache`
```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ cat run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
./main_axcl_aarch64 \
--template_filename_axmodel "qwen2.5-7b-ctx-int4-ax650/qwen2_p128_l%d_together.axmodel" \
--axmodel_num 28 \
--url_tokenizer_model "http://0.0.0.0:12345" \
--filename_post_axmodel "qwen2.5-7b-ctx-int4-ax650/qwen2_post.axmodel" \
--filename_tokens_embed "qwen2.5-7b-ctx-int4-ax650/model.embed_tokens.weight.bfloat16.bin" \
--tokens_embed_num 152064 \
--tokens_embed_size 3584 \
--use_mmap_load_embed 1 \
--live_print 1 \
--devices 0
#--system_prompt "你的名字叫小智(allen),你是一个人畜无害的AI助手。深圳市今天(4月1日)阴天,愚人节,气温在14°C至19°C之间,微风。" \
#--kvcache_path "./kvcache" \
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
TODO
#### Inference with M.2 Accelerator card
[What is M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html). This DEMO is shown running on a Raspberry Pi 5.
```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ ./run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
[I][ Init][ 130]: LLM init start
[I][ Init][ 34]: connect http://0.0.0.0:12345 ok
[I][ Init][ 57]: uid: ae9adea5-c64e-47df-92ca-29cbcc5a865f
bos_id: -1, eos_id: 151645
3% | ██ | 1 / 31 [0.49s<15.16s, 2.04 count/s] tokenizer init ok[I][ Init][ 45]: LLaMaEmbedSelector use mmap
6% | ███ | 2 / 31 [0.49s<7.59s, 4.08 count/s] embed_selector init ok
[I][ run][ 30]: AXCLWorker start with devid 0
54% | ████████████████████████████ █ █ █ ██ ██ | 17 / 31 [39.92s<77.35s, 0.40 count/s] init 24 axmodel ok,devid(0) remain_cmm(-1 MB) | 16 / 31 [39.92s<77.35s,100% | ████████████████████████████████ | 31 / 31 [80.60s<83.29s, 0.37 count/s] init post axmodel ok,remain_cmm(1324 MB)1891 MB)
[I][ Init][ 221]: max_token_len : 2047
[I][ Init][ 224]: kv_cache_size : 512, kv_cache_num: 2047
[I][ Init][ 232]: prefill_token_num : 128
[I][ Init][ 236]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 236]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 236]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 236]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 236]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 236]: grp: 6, prefill_max_token_num : 640
[I][ Init][ 236]: grp: 7, prefill_max_token_num : 768
[I][ Init][ 236]: grp: 8, prefill_max_token_num : 896
[I][ Init][ 236]: grp: 9, prefill_max_token_num : 1024
[I][ Init][ 240]: prefill_max_token_num : 1024
________________________
| ID| remain cmm(MB)|
========================
| 0| 1324|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 263]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 324]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 367]: input_num_token:21
[I][ main][ 234]: precompute_len: 21
[I][ main][ 235]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> nice
[I][ SetKVCache][ 614]: prefill_grpid:2 kv_cache_num:128 precompute_len:21 input_num_token:9
[I][ SetKVCache][ 617]: current prefill_max_token_num:896
[I][ Run][ 855]: input token num : 9, prefill_split_num : 1
[I][ Run][ 887]: input_num_token:9
[I][ Run][1016]: ttft: 928.08 ms
Nice to meet you! If you have any questions or need some help, feel free to ask.
[N][ Run][1168]: hit eos,avg 4.36 token/s
[I][ GetKVCache][ 583]: precompute_len:50, remaining:974
prompt >> q
[I][ run][ 80]: AXCLWorker exit with devid 0
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $
```
|
larry-cn/distilbert-rotten-tomatoes
|
larry-cn
| 2025-08-06T09:57:46Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-06T09:48:00Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
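Pending more details from the author, a minimal inference sketch with the text-classification pipeline is shown below; the label names (e.g. LABEL_0/LABEL_1) and their mapping to sentiment are assumptions to verify:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="larry-cn/distilbert-rotten-tomatoes")
print(clf("A heartfelt, beautifully acted film."))
print(clf("Two hours of my life I will never get back."))
# Each call returns something like [{'label': 'LABEL_1', 'score': 0.98}]
```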
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.54.1
- Pytorch 2.7.1
- Datasets 4.0.0
- Tokenizers 0.21.4
|
tencent/Hunyuan-7B-Instruct-AWQ-Int4
|
tencent
| 2025-08-06T09:50:22Z | 186 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Pretrain",
"base_model:quantized:tencent/Hunyuan-4B-Pretrain",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2025-07-31T08:33:17Z |
---
base_model:
- tencent/Hunyuan-4B-Pretrain
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/organization/Tencent-Hunyuan"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers from the branch below; Hunyuan support will be merged into the main branch later.
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
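For completeness, here is a small sketch of the two fast-thinking options described above, reusing the `tokenizer` and `messages` objects from the snippet:
```python
# Option 1: disable CoT reasoning via the chat-template flag
tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt",
    enable_thinking=False,
)

# Option 2: force fast thinking by prefixing the prompt itself
messages_no_think = [
    {"role": "user", "content": "/no_think Write a short summary of the benefits of regular exercise"},
]
```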
We recommend using the following set of parameters for inference. Note that our model does not have a default system_prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantization models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization. FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (without training) to pre-determine the quantization scale; the model weights and activation values are converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize with AngleSlim yourself, or directly download our released quantized open-source models [LINK](https://huggingface.co/).
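As a conceptual illustration only (this is not the AngelSlim API), static FP8 calibration boils down to picking one fixed scale per tensor from calibration data so that values fit the FP8 E4M3 range:
```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def fp8_static_scale(calibration_activations: np.ndarray) -> float:
    """Choose one static per-tensor scale from calibration data (no training)."""
    return float(np.abs(calibration_activations).max()) / FP8_E4M3_MAX

def fake_quant_fp8(x: np.ndarray, scale: float) -> np.ndarray:
    """Simulate the FP8 round-trip: scale down, clamp to the E4M3 range, scale back."""
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q * scale  # a real kernel would also round to the FP8 grid here

calib = np.random.randn(1024, 4096).astype(np.float32)
scale = fp8_static_scale(calib)
x_hat = fake_quant_fp8(calib, scale)
```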
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights, and adjusts the weights layer by layer via an optimization process that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (without the need for training) to compute statistics of the activation magnitudes. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize with [AngleSlim](https://github.com/tencent/AngelSlim), or directly download our released quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection describes the benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started:
https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download Model file:
- Hugging Face: downloaded automatically by vLLM.
- ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- model download by huggingface:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- model downloaded by modelscope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script is running successfully, run the request script:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes the process of deploying a post-quantization (quantized) model using vLLM.
The default server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model only requires setting the environment variables, using the GPTQ method:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables
Next we start the FP8 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email ([email protected]).
|
bubblspace/Timecapsule2.7B-g3n-mix-match-gguf
|
bubblspace
| 2025-08-06T09:36:48Z | 6 | 1 | null |
[
"gguf",
"gemma3n",
"matformer",
"fp16",
"multimodal",
"mixnmatch",
"timecapsule",
"timecapsule-slm",
"gemm3nchallenge",
"bubblspace",
"base_model:google/gemma-3n-E4B-it",
"base_model:quantized:google/gemma-3n-E4B-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T08:26:12Z |
---
license: gemma
base_model:
- google/gemma-3n-E4B-it
tags:
- gemma3n
- matformer
- gguf
- fp16
- multimodal
- mixnmatch
- timecapsule
- timecapsule-slm
- gemm3nchallenge
- bubblspace
---
# TimeCapsule Gemma 3n 2.7B Slice (FP16 GGUF)
This model is a **2.7 B parameter** sub‑model of Gemma 3n, created using the **MatFormer (Matryoshka Transformer)** architecture and the **Mix‑n‑Match slicing** approach. It was sliced from the E4B checkpoint using the official E2.69B (layer‑level) configuration.
---
## 🧠 Intended Use
- Primary use: High‑precision inference with Ollama via FP16 GGUF.
- Best suited for: [TimeCapsule‑SLM](https://timecapsule.bubblspace.com/) deep‑research workflows where latency, accuracy, and compute tradeoffs matter.
---
## ⚠️ Limitations & Considerations
- Derived from a larger model — may not match the full E4B model in some evaluations.
- Operates in FP16 precision — requires hardware (like A100/GPU or Ollama host) with FP16 support.
- No additional quantization applied, preserving accuracy at some memory cost.
---
## 🛠 Creation Details
- **Parent model**: `google/gemma-3n-E4B-it`
- **Slice configuration**: `Config for E2.69B (layer-level)` from the official slicing-configs dataset
- Converted from `.safetensors` to **FP16 GGUF** using `llama.cpp`’s `convert_hf_to_gguf.py`
- Uploaded to this repository as: `tc_mixmatch_f16.gguf`
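The conversion step above corresponds roughly to the following command; the input directory name is a placeholder for the sliced HF checkpoint:
```bash
# Sketch of the .safetensors -> FP16 GGUF conversion with llama.cpp
python convert_hf_to_gguf.py ./timecapsule-2.7b-sliced \
    --outtype f16 \
    --outfile tc_mixmatch_f16.gguf
```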
---
## 🧪 Usage Example
```bash
ollama run hf.co/bubblspace/Timecapsule2.7B-g3n-mix-match-gguf
```
|
crystalline7/829493
|
crystalline7
| 2025-08-06T09:27:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T09:27:54Z |
[View on Civ Archive](https://civitaiarchive.com/models/824526?modelVersionId=922027)
|
crystalline7/905426
|
crystalline7
| 2025-08-06T09:27:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T09:27:26Z |
[View on Civ Archive](https://civitaiarchive.com/models/890867?modelVersionId=996897)
|
crystalline7/1125169
|
crystalline7
| 2025-08-06T09:27:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T09:27:10Z |
[View on Civ Archive](https://civitaiarchive.com/models/1086251?modelVersionId=1219821)
|
crystalline7/1171839
|
crystalline7
| 2025-08-06T09:25:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T09:25:36Z |
[View on Civ Archive](https://civitaiarchive.com/models/1093251?modelVersionId=1266834)
|
ddore14/RooseBERT-scr-uncased
|
ddore14
| 2025-08-06T09:24:09Z | 4 | 0 | null |
[
"safetensors",
"bert",
"nlp",
"political debates",
"en",
"arxiv:2508.03250",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-07-07T15:11:47Z |
---
license: cc-by-nc-sa-4.0
language:
- en
metrics:
- perplexity
base_model:
- google-bert/bert-base-uncased
tags:
- nlp
- political debates
---
# RooseBERT-scr-uncased
This model is a fine-tuned version of bert-base-uncased.
It achieves the following results on the evaluation set:
- Loss: 0.9922
- Accuracy: 0.7639
- Perplexity: 2.848
## Model description
This model builds on the same architecture as `bert-base-uncased`, leveraging transformer-based contextual embeddings to better understand the nuances of political language.
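A minimal masked-language-modelling sketch with the fill-mask pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ddore14/RooseBERT-scr-uncased")
for pred in fill("The government will [MASK] taxes next year."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```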
## Intended Use Cases
### Suitable Applications
- **Political discourse analysis**: Identifying patterns, sentiments, and rhetoric in debates.
- **Contextual word interpretation**: Understanding the meaning of words within political contexts.
- **Sentiment classification**: Differentiating positive, neutral, and negative sentiments in political speech.
- **Text generation improvement**: Enhancing auto-completions and summaries in politically focused language models.
### Limitations
- **Bias Sensitivity**: Since it was trained on political debates, inherent biases in the data may be reflected in the model’s outputs.
- **Not Suitable for General-Purpose NLP**: Its optimization is specific for political contexts.
- **Does Not Perform Fact-Checking**: The model does not verify factual accuracy.
## Training and Evaluation Data
The model was trained on a curated dataset of political debates sourced from:
- **Parliamentary transcripts**
- **Presidential debates and public speeches**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 2048
- total_eval_batch_size: 512
- optimizer: Use adamw_torch with betas=(0.9,0.98) and epsilon=1e-06 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 250000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:--------:|:------:|:--------:|:---------------:|
| No log | 0 | 0 | 0.0000 | 10.4141 |
| 1.437 | 12.6967 | 50000 | 0.6960 | 1.3779 |
| 1.2809 | 25.3936 | 100000 | 0.7207 | 1.2324 |
| 1.2188 | 38.0904 | 150000 | 0.7313 | 1.1719 |
| 1.185 | 50.7871 | 200000 | 0.7363 | 1.1426 |
| 1.0261 | 274.6656 | 250000 | 0.9917 | 0.7641 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
### Citation
If you use this model, cite us:
```bibtex
@misc{
dore2025roosebertnewdealpolitical,
title={RooseBERT: A New Deal For Political Language Modelling},
author={Deborah Dore and Elena Cabrio and Serena Villata},
year={2025},
eprint={2508.03250},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.03250},
}
```
|
crystalline7/951809
|
crystalline7
| 2025-08-06T09:21:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T09:21:21Z |
[View on Civ Archive](https://civitaiarchive.com/models/931454?modelVersionId=1042648)
|
mradermacher/Cthulhu-24B-v1.2-GGUF
|
mradermacher
| 2025-08-06T09:18:47Z | 175 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Fentible/Cthulhu-24B-v1.2",
"base_model:quantized:Fentible/Cthulhu-24B-v1.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T23:51:19Z |
---
base_model: Fentible/Cthulhu-24B-v1.2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Fentible/Cthulhu-24B-v1.2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Cthulhu-24B-v1.2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
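For example, with a recent llama.cpp build, one of the single-file quants from the table below can be run along these lines (file name taken from the Q4_K_M row; paths and sampling options are yours to adjust):
```bash
# Minimal llama.cpp sketch; assumes llama-cli is built and the file is downloaded
llama-cli -m Cthulhu-24B-v1.2.Q4_K_M.gguf -ngl 99 -c 8192 \
  -p "Write a short sea shanty about R'lyeh."
```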
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NYTK/PULI-Trio-Q
|
NYTK
| 2025-08-06T09:14:01Z | 36 | 2 | null |
[
"safetensors",
"qwen2",
"puli",
"hu",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T09:57:51Z |
---
license: apache-2.0
language:
- hu
- en
- zh
tags:
- puli
---
# PULI Trio Q 7B base (7.62 billion parameters)
- Trained with LLaMA-Factory [github](https://github.com/hiyouga/LLaMA-Factory)
- The [Qwen2.5 7B Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) model was continually pretrained on a Hungarian dataset
## Dataset for continued pretraining
- Hungarian (8.08 billion words): documents (763K) that exceed 5000 words in length + Hungarian Wikipedia
- English: Long Context QA (2 billion words), BookSum (78 million words)
- Chinese (3 billion Chinese characters): Wudao
- The training was completed using a Hungarian-only dataset:
- 626 million Hungarian words (**1 epoch**): Hungarian Wikipedia + News articles
## Limitations
- max_seq_length = 32 768
## Usage with pipeline
```python
from transformers import pipeline, Qwen2ForCausalLM, AutoTokenizer
model = Qwen2ForCausalLM.from_pretrained("NYTK/PULI-Trio-Q")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-Trio-Q")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=0)
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-llumix-llama,
title = {PULI Chat: Our First Hungarian Conversational Model},
booktitle = {International Conference on Formal Methods and Foundations of Artificial Intelligence},
year = {2025},
publisher = {Eszterházy Károly Catholic University},
address = {Eger, Hungary},
author = {Yang, Zijian Győző and Bánfi, Ágnes and Dodé, Réka and Ferenczi, Gergő and Földesi, Flóra and Hatvani, Péter and Héja, Enikő and Lengyel, Mariann and Madarász, Gábor and Osváth, Mátyás and Sárossy, Bence and Varga, Kristóf and Váradi, Tamás and Prószéky, Gábor and Ligeti-Nagy, Noémi},
pages = {1--3},
pubstate={accepted abstract},
url ={https://uni-eszterhazy.hu/api/media/file/7f9158bd443acc29dbd2a211971fe8677768257c}
}
```
|
Wienerr/phi-3-mini-LoRA-F16-GGUF
|
Wienerr
| 2025-08-06T09:04:58Z | 7 | 0 |
peft
|
[
"peft",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"code",
"python",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:edumunozsala/phi-3-mini-LoRA",
"base_model:adapter:edumunozsala/phi-3-mini-LoRA",
"license:mit",
"region:us"
] |
text-generation
| 2025-08-06T09:04:57Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- code
- python
- llama-cpp
- gguf-my-lora
base_model: edumunozsala/phi-3-mini-LoRA
datasets:
- iamtarun/python_code_instructions_18k_alpaca
pipeline_tag: text-generation
model-index:
- name: phi-3-mini-LoRA
results: []
---
# Wienerr/phi-3-mini-LoRA-F16-GGUF
This LoRA adapter was converted to GGUF format from [`edumunozsala/phi-3-mini-LoRA`](https://huggingface.co/edumunozsala/phi-3-mini-LoRA) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/edumunozsala/phi-3-mini-LoRA) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora phi-3-mini-LoRA-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora phi-3-mini-LoRA-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
Dheeraj133/led-biomarker
|
Dheeraj133
| 2025-08-06T09:00:53Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T09:00:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
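In the absence of author-provided instructions, a minimal loading sketch for this LED sequence-to-sequence checkpoint might look like the following; the summarisation-style usage is an assumption based on the model name, not documented behaviour:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Dheeraj133/led-biomarker"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Long biomedical passage to condense goes here..."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```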
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vmthinh159/Taxi-v3
|
vmthinh159
| 2025-08-06T08:39:51Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T08:35:34Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="vmthinh159/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
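The snippet above assumes `load_from_hub` and `gym` are already imported. A self-contained evaluation sketch, assuming the pickle stores a dict with `qtable` and `env_id` keys (the Hugging Face Deep RL course format) and that Gymnasium is installed:
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the trained Q-table (equivalent to load_from_hub).
path = hf_hub_download(repo_id="vmthinh159/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = model["qtable"]

# Greedy rollout: always pick the highest-value action for the current state.
state, _ = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```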
|
Devique/gemma-3n-finetuned3
|
Devique
| 2025-08-06T08:24:13Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-06T07:34:50Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Devique
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
samsl/rocketshp
|
samsl
| 2025-08-06T08:02:02Z | 0 | 0 | null |
[
"biology",
"license:mit",
"region:us"
] | null | 2025-04-02T08:56:22Z |
---
license: mit
tags:
- biology
---
|
Rhythm/q-FrozenLake-v1-4x4-noSlippery
|
Rhythm
| 2025-08-06T07:55:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T07:55:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Rhythm/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
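Because FrozenLake's default environment is slippery, recreating the exact training environment matters when evaluating the Q-table. A minimal sketch, assuming the agent was trained on the deterministic 4x4 map (as the `no_slippery` repo name suggests):
```python
import gymnasium as gym

# Recreate the deterministic 4x4 environment the Q-table was trained on.
env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False)
state, _ = env.reset(seed=42)
print(env.observation_space.n, env.action_space.n)  # 16 states, 4 actions
```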
|
ARG-NCTU/detr-resnet-50-federated-maskedloss-3-clients-3-datasets
|
ARG-NCTU
| 2025-08-06T07:51:48Z | 1,134 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-08-03T17:15:04Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50-federated-maskedloss-3-clients-3-datasets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-federated-maskedloss-3-clients-3-datasets
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.1.0
- Tokenizers 0.21.4
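The card does not include usage code; below is a minimal object-detection sketch with the 🤗 `pipeline` API, assuming the checkpoint ships its image processor (standard for DETR fine-tunes pushed with the Trainer). The image path is a placeholder.
```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="ARG-NCTU/detr-resnet-50-federated-maskedloss-3-clients-3-datasets",
)
# "image.jpg" is a placeholder path; any local image or URL works.
for det in detector("image.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```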
|
Thireus/GLM-4.5-THIREUS-IQ5_K_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T07:50:45Z | 6 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T17:40:22Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
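For reference, a hedged sketch of the merge mentioned above, assuming the first shard follows the naming pattern from the quick-start snippet (check `llama-gguf-split --help` for the exact usage in your build):
```
# Merge per-tensor shards back into a single GGUF file.
~/ik_llama.cpp/build/bin/llama-gguf-split --merge \
  GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
  GLM-4.5-merged.gguf
```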
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
Thireus/GLM-4.5-THIREUS-Q4_0_R8-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T07:36:48Z | 9 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T12:02:35Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
tiny-random/gpt-oss-mxfp4
|
tiny-random
| 2025-08-06T07:32:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-06T07:32:39Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- openai/gpt-oss-120b
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
Note: This model uses a quantized MXFP4 FFN. Install the required Triton kernels first: `pip install -U triton git+https://github.com/triton-lang/triton.git@main#subdirectory=python/triton_kernels`
### Example usage:
- vLLM
```bash
vllm serve tiny-random/gpt-oss-mxfp4
```
- Transformers
```python
import torch
from transformers import pipeline
model_id = "tiny-random/gpt-oss-mxfp4"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype='auto',
device_map="cuda",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=16,
)
print(outputs[0]["generated_text"][-1])
```
### Codes to create this repo:
```python
import json
import safetensors
import torch
from huggingface_hub import hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
GenerationConfig,
GptOssForCausalLM,
pipeline,
set_seed,
)
source_model_id = "openai/gpt-oss-120b"
save_folder = "/tmp/tiny-random/gpt-oss-mxfp4"
processor = AutoProcessor.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r') as f:
config_json = json.load(f)
config_json.update({
"head_dim": 32,
"hidden_size": 32, # required by Mxfp4GptOssExperts codes
"intermediate_size": 64,
"layer_types": ["sliding_attention", "full_attention"],
"num_attention_heads": 2,
"num_hidden_layers": 2,
"num_key_value_heads": 1,
"num_local_experts": 32,
"tie_word_embeddings": True,
})
quantization_config = config_json['quantization_config']
del config_json['quantization_config']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(save_folder)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, torch_dtype=torch.bfloat16)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
# mxfp4
state_dict = model.cpu().state_dict()
del state_dict['lm_head.weight']
for i in range(len(model.model.layers)):
del state_dict[f'model.layers.{i}.mlp.experts.down_proj']
del state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj']
state_dict[f'model.layers.{i}.mlp.experts.down_proj_blocks'] = torch.randint(0, 255, size=(
config.num_local_experts, config.hidden_size, config.intermediate_size // 32, 16), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.down_proj_scales'] = torch.randint(0, 4, size=(
config.num_local_experts, config.hidden_size, config.intermediate_size // 32), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj_blocks'] = torch.randint(0, 255, size=(
config.num_local_experts, 2 * config.intermediate_size, config.hidden_size // 32, 16), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj_scales'] = torch.randint(0, 4, size=(
config.num_local_experts, 2 * config.intermediate_size, config.hidden_size // 32), dtype=torch.uint8
)
safetensors.torch.save_file(state_dict, f"{save_folder}/model.safetensors")
# from unittest.mock import Mock
# from transformers.quantizers.auto import AutoHfQuantizer
# from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer
# _get_device_capability = torch.cuda.get_device_capability
# torch.cuda.get_device_capability = Mock(return_value=(9, 0))
# set_seed(42)
# bf16_state_dict = model.cpu().state_dict()
# model = AutoModelForCausalLM.from_pretrained(save_folder, torch_dtype=torch.bfloat16, quantization_config=quantization_config)
# for i in range(len(model.model.layers)):
# model.model.layers[i].mlp.experts.down_proj_bottom_pad = 0
# model.model.layers[i].mlp.experts.down_proj_right_pad = 0
# hf_quantizer: Mxfp4HfQuantizer = AutoHfQuantizer.from_config(quantization_config)
# hf_quantizer.pre_quantized = False
# ffn_keys = ['model.layers.0.mlp.experts.down_proj', 'model.layers.0.mlp.experts.gate_up_proj',
# 'model.layers.1.mlp.experts.down_proj', 'model.layers.1.mlp.experts.gate_up_proj']
# for key in ffn_keys:
# hf_quantizer.create_quantized_param(model, bf16_state_dict[key], key, "cuda", bf16_state_dict)
# print('down_proj', model.model.layers[0].mlp.experts.down_proj)
# print('down_proj_blocks', model.model.layers[0].mlp.experts.down_proj_blocks)
# state_dict = model.state_dict()
# del state_dict['lm_head.weight']
# for key in ffn_keys:
# del state_dict[key]
# for k, v in state_dict.items():
# if str(v.device) == 'meta':
# print(k, v.device, v.shape)
# safetensors.torch.save_file(state_dict, f"{save_folder}/model.safetensors")
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config = json.load(f)
config['quantization_config'] = quantization_config
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config, f, indent=2)
# torch.cuda.get_device_capability = _get_device_capability
```
|
crystalline7/1276846
|
crystalline7
| 2025-08-06T07:12:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T07:12:41Z |
[View on Civ Archive](https://civitaiarchive.com/models/382886?modelVersionId=1373921)
|
HueyAnthonyDisward/t5-small-finetuned-wikisql
|
HueyAnthonyDisward
| 2025-08-06T07:06:00Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T04:19:53Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0450
- Model Preparation Time: 0.003
- Rouge2 Fmeasure: 0.802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------------:|:---------------:|
| 0.099 | 1.0 | 4420 | 0.0659 | 0.003 | 0.7932 |
| 0.0779 | 2.0 | 8840 | 0.0533 | 0.003 | 0.7976 |
| 0.0675 | 3.0 | 13260 | 0.0482 | 0.003 | 0.7998 |
| 0.0599 | 4.0 | 17680 | 0.0459 | 0.003 | 0.8013 |
| 0.062 | 5.0 | 22100 | 0.0450 | 0.003 | 0.802 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
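The card does not show inference code; below is a minimal sketch with the 🤗 `pipeline` API, assuming the model expects a `translate English to SQL:`-style prompt (the convention commonly used for WikiSQL T5 fine-tunes — check the training setup for the exact prefix).
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="HueyAnthonyDisward/t5-small-finetuned-wikisql",
)
prompt = "translate English to SQL: How many employees joined after 2020?"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```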
|
arianaazarbal/underspecified_hacker_8_5_tests_5
|
arianaazarbal
| 2025-08-06T07:03:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T07:02:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
slickrick67/Qwen3-0.6B-Gensyn-Swarm-sizable_thorny_barracuda
|
slickrick67
| 2025-08-06T07:02:46Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am sizable_thorny_barracuda",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-28T00:42:37Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am sizable_thorny_barracuda
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalline7/1366322
|
crystalline7
| 2025-08-06T06:57:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:57:28Z |
[View on Civ Archive](https://civitaiarchive.com/models/1286235?modelVersionId=1465150)
|
crystalline7/1432896
|
crystalline7
| 2025-08-06T06:56:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:56:01Z |
[View on Civ Archive](https://civitaiarchive.com/models/1349693?modelVersionId=1532942)
|
crystalline7/1163097
|
crystalline7
| 2025-08-06T06:55:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:55:12Z |
[View on Civ Archive](https://civitaiarchive.com/models/1119281?modelVersionId=1257929)
|
yujiepan/gpt-oss-tiny-random-bf16
|
yujiepan
| 2025-08-06T06:53:18Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T05:16:31Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- openai/gpt-oss-120b
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
Note: This model is in BF16; quantized MXFP4 FFN is not used.
### Example usage:
- vLLM
```bash
vllm serve yujiepan/gpt-oss-tiny-random-bf16
```
- Transformers
```python
import torch
from transformers import pipeline
model_id = "yujiepan/gpt-oss-tiny-random-bf16"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="cuda"
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=16,
)
print(outputs[0]["generated_text"][-1])
```
### Codes to create this repo:
```python
import json
import torch
from huggingface_hub import hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
GenerationConfig,
GptOssForCausalLM,
pipeline,
set_seed,
)
source_model_id = "openai/gpt-oss-120b"
save_folder = "/tmp/yujiepan/gpt-oss-tiny-random-bf16"
processor = AutoProcessor.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r') as f:
config_json = json.load(f)
config_json.update({
"head_dim": 32,
"hidden_size": 32, # required by Mxfp4GptOssExperts codes
"intermediate_size": 64,
"layer_types": ["sliding_attention", "full_attention"],
"num_attention_heads": 2,
"num_hidden_layers": 2,
"num_key_value_heads": 1,
"num_local_experts": 32,
"tie_word_embeddings": True,
})
quantization_config = config_json['quantization_config']
del config_json['quantization_config']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(save_folder)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
# mxfp4
from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer
# model = AutoModelForCausalLM.from_pretrained(save_folder, trust_remote_code=True, torch_dtype=torch.bfloat16, quantization_config=quantization_config)
# model.save_pretrained(save_folder, safe_serialization=True)
```
|
crystalline7/1404061
|
crystalline7
| 2025-08-06T06:52:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:52:47Z |
[View on Civ Archive](https://civitaiarchive.com/models/1332107?modelVersionId=1504060)
|
minimimtoy25/fernando1
|
minimimtoy25
| 2025-08-06T06:52:07Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-06T04:09:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
minimimtoy25/sorocaba1
|
minimimtoy25
| 2025-08-06T06:51:48Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-06T04:12:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
crystalline7/1498936
|
crystalline7
| 2025-08-06T06:51:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:51:02Z |
[View on Civ Archive](https://civitaiarchive.com/models/1414614?modelVersionId=1598852)
|