modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
crystalline7/304651 | crystalline7 | 2025-08-06T09:29:55Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:29:52Z | [View on Civ Archive](https://civitaiarchive.com/models/339269?modelVersionId=379878) |
crystalline7/1085489 | crystalline7 | 2025-08-06T09:29:40Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:29:37Z | [View on Civ Archive](https://civitaiarchive.com/models/1051817?modelVersionId=1180235) |
crystalline7/842831 | crystalline7 | 2025-08-06T09:29:10Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:29:07Z | [View on Civ Archive](https://civitaiarchive.com/models/836144?modelVersionId=935492) |
crystalline7/1315844 | crystalline7 | 2025-08-06T09:27:37Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:27:37Z | [View on Civ Archive](https://civitaiarchive.com/models/397313?modelVersionId=1413992) |
crystalline7/907234 | crystalline7 | 2025-08-06T09:26:29Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:26:24Z | [View on Civ Archive](https://civitaiarchive.com/models/862261?modelVersionId=964753) |
Jacksss123/net72_uid191 | Jacksss123 | 2025-08-06T09:26:07Z | 1 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2025-08-06T09:17:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
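The card does not yet include usage code. Based on the repository tags (`transformers`, `vit`, `image-classification`), a minimal sketch along the following lines should work; the repo id is taken from this listing and the input image path is only a placeholder:
```python
from transformers import pipeline

# Assumes the checkpoint exposes a standard ViT image-classification head.
classifier = pipeline("image-classification", model="Jacksss123/net72_uid191")

# Any local path, URL, or PIL image works here; this path is purely illustrative.
predictions = classifier("path/to/your_image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```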
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalline7/1074877 | crystalline7 | 2025-08-06T09:26:07Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:26:04Z | [View on Civ Archive](https://civitaiarchive.com/models/1042343?modelVersionId=1169514) |
crystalline7/863961 | crystalline7 | 2025-08-06T09:26:00Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:25:58Z | [View on Civ Archive](https://civitaiarchive.com/models/855526?modelVersionId=957222) |
crystalline7/1133348 | crystalline7 | 2025-08-06T09:25:25Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:25:15Z | [View on Civ Archive](https://civitaiarchive.com/models/1093251?modelVersionId=1227895) |
ddore14/RooseBERT-cont-cased | ddore14 | 2025-08-06T09:24:54Z | 3 | 0 | null | ["safetensors", "bert", "nlp", "political debates", "en", "arxiv:2508.03250", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:cc-by-nc-sa-4.0", "region:us"] | null | 2025-07-07T15:19:06Z |
---
license: cc-by-nc-sa-4.0
language:
- en
metrics:
- perplexity
base_model:
- google-bert/bert-base-cased
tags:
- nlp
- political debates
---
# RooseBERT-cont-cased
This model is a fine-tuned version of bert-base-cased.
It achieves the following results on the evaluation set:
- Loss: 0.8975
- Accuracy: 0.7830
- Perplexity: 2.580
## Model description
This model builds on the same architecture as `bert-base-cased`, leveraging transformer-based contextual embeddings to better understand the nuances of political language.
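Since RooseBERT keeps the masked-language-modelling objective of `bert-base-cased`, a quick way to probe its contextual embeddings is the fill-mask pipeline. The snippet below is a minimal sketch; the example sentence is illustrative only:
```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard BERT masked-language model.
fill_mask = pipeline("fill-mask", model="ddore14/RooseBERT-cont-cased")

# Example sentence is illustrative; [MASK] is the BERT mask token.
for pred in fill_mask("The honourable member raised a [MASK] about the budget."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```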
## Intended Use Cases
### Suitable Applications
- **Political discourse analysis**: Identifying patterns, sentiments, and rhetoric in debates.
- **Contextual word interpretation**: Understanding the meaning of words within political contexts.
- **Sentiment classification**: Differentiating positive, neutral, and negative sentiments in political speech.
- **Text generation improvement**: Enhancing auto-completions and summaries in politically focused language models.
### Limitations
- **Bias Sensitivity**: Since it was trained on political debates, inherent biases in the data may be reflected in the model’s outputs.
- **Not Suitable for General-Purpose NLP**: Its optimization is specific for political contexts.
- **Does Not Perform Fact-Checking**: The model does not verify factual accuracy.
## Training and Evaluation Data
The model was trained on a curated dataset of political debates sourced from:
- **Parliamentary transcripts**
- **Presidential debates and public speeches**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 2048
- total_eval_batch_size: 512
- optimizer: AdamW (torch) with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 150000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:--------:|:------:|:--------:|:---------------:|
| No log | 0 | 0 | 0.0000 | 10.4062 |
| 1.1477 | 12.6967 | 50000 | 0.7461 | 1.0957 |
| 1.0856 | 25.3936 | 100000 | 0.7560 | 1.0410 |
| 0.9345 | 164.7996 | 150000 | 0.8975 | 0.7830 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
### Citation
If you use this model, cite us:
```bibtex
@misc{dore2025roosebertnewdealpolitical,
title={RooseBERT: A New Deal For Political Language Modelling},
author={Deborah Dore and Elena Cabrio and Serena Villata},
year={2025},
eprint={2508.03250},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.03250},
}
```
|
crystalline7/1280400 | crystalline7 | 2025-08-06T09:23:09Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:23:04Z | [View on Civ Archive](https://civitaiarchive.com/models/1208263?modelVersionId=1377488) |
crystalline7/1162351 | crystalline7 | 2025-08-06T09:22:54Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:22:54Z | [View on Civ Archive](https://civitaiarchive.com/models/1118590?modelVersionId=1257142) |
crystalline7/848185 | crystalline7 | 2025-08-06T09:21:44Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:21:40Z | [View on Civ Archive](https://civitaiarchive.com/models/840927?modelVersionId=940820) |
crystalline7/951809 | crystalline7 | 2025-08-06T09:21:25Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:21:21Z | [View on Civ Archive](https://civitaiarchive.com/models/931454?modelVersionId=1042648) |
crystalline7/888729 | crystalline7 | 2025-08-06T09:21:17Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:21:15Z | [View on Civ Archive](https://civitaiarchive.com/models/877421?modelVersionId=982211) |
crystalline7/806054 | crystalline7 | 2025-08-06T09:20:41Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:20:37Z | [View on Civ Archive](https://civitaiarchive.com/models/802768?modelVersionId=897608) |
crystalline7/346145 | crystalline7 | 2025-08-06T09:20:21Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:20:16Z | [View on Civ Archive](https://civitaiarchive.com/models/379878?modelVersionId=424118) |
crystalline7/1358710 | crystalline7 | 2025-08-06T09:19:48Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:19:43Z | [View on Civ Archive](https://civitaiarchive.com/models/379878?modelVersionId=1457519) |
crystalline7/1170357 | crystalline7 | 2025-08-06T09:19:29Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T09:19:26Z | [View on Civ Archive](https://civitaiarchive.com/models/1125749?modelVersionId=1265337) |
mradermacher/Cthulhu-24B-v1.2-GGUF | mradermacher | 2025-08-06T09:18:47Z | 175 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Fentible/Cthulhu-24B-v1.2", "base_model:quantized:Fentible/Cthulhu-24B-v1.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-05T23:51:19Z |
---
base_model: Fentible/Cthulhu-24B-v1.2
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Fentible/Cthulhu-24B-v1.2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Cthulhu-24B-v1.2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.
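As a concrete (unofficial) example, the snippet below downloads one of the quants listed in the table below and runs it with `llama-cpp-python`; the file choice and parameters are illustrative only:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the Q4_K_M quant from the table below (swap the filename for another quant if preferred).
gguf_path = hf_hub_download(
    repo_id="mradermacher/Cthulhu-24B-v1.2-GGUF",
    filename="Cthulhu-24B-v1.2.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice
out = llm("Write one sentence about cosmic horror.", max_tokens=64)
print(out["choices"][0]["text"])
```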
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Yuichi1218/Llama-3.1-Lafeak-8B-chatvecor-NonPeriod | Yuichi1218 | 2025-08-06T09:17:13Z | 17 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-06T09:13:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
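No official snippet is provided. Given the tags (`transformers`, `llama`, `text-generation`), a generic sketch such as the following is a reasonable starting point; the prompt and generation settings are arbitrary:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yuichi1218/Llama-3.1-Lafeak-8B-chatvecor-NonPeriod"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; dtype is chosen automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Prompt and max_new_tokens are illustrative choices.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```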
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/SST-2-GLoRA-p20-seed10 | ekiprop | 2025-08-06T09:14:16Z | 54 | 0 | peft | ["peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2025-08-06T09:02:16Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p20-seed10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p20-seed10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1884
- Accuracy: 0.9484
## Model description
More information needed
## Intended uses & limitations
More information needed
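The card gives no usage guidance. Since the metadata describes a LoRA adapter for `roberta-base` trained on SST-2, a minimal PEFT sketch would look roughly like this; the two-label sentiment head and label order are assumptions based on the SST-2 name:
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# num_labels=2 is assumed from the SST-2 task name.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-GLoRA-p20-seed10")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("A genuinely uplifting film.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.softmax(-1))  # [negative, positive] probabilities (label order assumed)
```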
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4199 | 0.0950 | 200 | 0.2210 | 0.9174 |
| 0.3044 | 0.1900 | 400 | 0.2208 | 0.9220 |
| 0.2977 | 0.2850 | 600 | 0.2075 | 0.9209 |
| 0.257 | 0.3800 | 800 | 0.2154 | 0.9312 |
| 0.2594 | 0.4751 | 1000 | 0.2336 | 0.9243 |
| 0.2552 | 0.5701 | 1200 | 0.2057 | 0.9255 |
| 0.2511 | 0.6651 | 1400 | 0.2051 | 0.9209 |
| 0.2469 | 0.7601 | 1600 | 0.2049 | 0.9289 |
| 0.2434 | 0.8551 | 1800 | 0.1923 | 0.9358 |
| 0.2328 | 0.9501 | 2000 | 0.2095 | 0.9312 |
| 0.2445 | 1.0451 | 2200 | 0.1810 | 0.9369 |
| 0.2221 | 1.1401 | 2400 | 0.2009 | 0.9312 |
| 0.2305 | 1.2352 | 2600 | 0.2342 | 0.9209 |
| 0.2285 | 1.3302 | 2800 | 0.2102 | 0.9323 |
| 0.2212 | 1.4252 | 3000 | 0.2039 | 0.9358 |
| 0.2194 | 1.5202 | 3200 | 0.1893 | 0.9323 |
| 0.2236 | 1.6152 | 3400 | 0.1850 | 0.9392 |
| 0.2081 | 1.7102 | 3600 | 0.2182 | 0.9312 |
| 0.2095 | 1.8052 | 3800 | 0.2164 | 0.9381 |
| 0.2258 | 1.9002 | 4000 | 0.1871 | 0.9392 |
| 0.2104 | 1.9952 | 4200 | 0.2022 | 0.9404 |
| 0.2065 | 2.0903 | 4400 | 0.1696 | 0.9438 |
| 0.2027 | 2.1853 | 4600 | 0.1860 | 0.9381 |
| 0.2015 | 2.2803 | 4800 | 0.1884 | 0.9427 |
| 0.1994 | 2.3753 | 5000 | 0.1831 | 0.9438 |
| 0.2007 | 2.4703 | 5200 | 0.1890 | 0.9438 |
| 0.2023 | 2.5653 | 5400 | 0.1992 | 0.9312 |
| 0.2057 | 2.6603 | 5600 | 0.1720 | 0.9427 |
| 0.199 | 2.7553 | 5800 | 0.1685 | 0.9427 |
| 0.189 | 2.8504 | 6000 | 0.1860 | 0.9438 |
| 0.1905 | 2.9454 | 6200 | 0.1730 | 0.9438 |
| 0.1872 | 3.0404 | 6400 | 0.1864 | 0.9427 |
| 0.1794 | 3.1354 | 6600 | 0.1804 | 0.9427 |
| 0.1903 | 3.2304 | 6800 | 0.1870 | 0.9404 |
| 0.1928 | 3.3254 | 7000 | 0.1880 | 0.9427 |
| 0.1803 | 3.4204 | 7200 | 0.1884 | 0.9484 |
| 0.1785 | 3.5154 | 7400 | 0.2194 | 0.9427 |
| 0.1791 | 3.6105 | 7600 | 0.2013 | 0.9381 |
| 0.1835 | 3.7055 | 7800 | 0.1802 | 0.9472 |
| 0.1871 | 3.8005 | 8000 | 0.1918 | 0.9450 |
| 0.1802 | 3.8955 | 8200 | 0.1863 | 0.9450 |
| 0.1746 | 3.9905 | 8400 | 0.1866 | 0.9484 |
| 0.1745 | 4.0855 | 8600 | 0.1967 | 0.9461 |
| 0.1724 | 4.1805 | 8800 | 0.2065 | 0.9427 |
| 0.1743 | 4.2755 | 9000 | 0.2031 | 0.9392 |
| 0.1695 | 4.3705 | 9200 | 0.1942 | 0.9427 |
| 0.1649 | 4.4656 | 9400 | 0.2025 | 0.9427 |
| 0.1642 | 4.5606 | 9600 | 0.2043 | 0.9438 |
| 0.1745 | 4.6556 | 9800 | 0.1922 | 0.9427 |
| 0.1672 | 4.7506 | 10000 | 0.1891 | 0.9461 |
| 0.1813 | 4.8456 | 10200 | 0.1840 | 0.9438 |
| 0.1763 | 4.9406 | 10400 | 0.1854 | 0.9438 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mithun932001/lora_model | mithun932001 | 2025-08-06T09:13:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-06T09:12:06Z |
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mithun932001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xuminglong/kontext-tryon | xuminglong | 2025-08-06T09:01:05Z | 923 | 19 | diffusers | ["diffusers", "lora", "flux", "image-to-image", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:mit", "region:us"] | image-to-image | 2025-07-04T10:32:17Z |
---
license: mit
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
tags:
- lora
- diffusers
- flux
---
Kontext outfit-swap prompt-enhancement LoRA.
Add the following prompt:
Dress the figure in the right image in the clothes from the left image.
This makes it easy to have the person in the right image wear the clothes from the left image.
Use it with the workflow below; with an ordinary workflow the effect is less pronounced, which is related to how the data was preprocessed. The workflow below is aligned with the way the model was trained.
https://www.runninghub.cn/post/1939762212267593730/?inviteCode=kol01-rh0102
---
The mytryon4 LoRA greatly improves the outfit-swap success rate, largely preserves the original figure's body shape and pose, and closely reproduces clothing details.
Workflow paired with the mytryon4 LoRA:
https://www.runninghub.cn/post/1944965304159232001/?inviteCode=kol01-rh0102
It can be deployed locally.
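For reference, a diffusers-based sketch of how such a Kontext LoRA is typically applied is shown below. It assumes the `FluxKontextPipeline` class, a GPU with enough memory, and a composite input image; all parameter values are illustrative rather than taken from the workflows above:
```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("xuminglong/kontext-tryon")  # this repository's LoRA

# Left/right composite of clothes + person, as described above (path is a placeholder).
image = load_image("composite_clothes_left_person_right.png")
result = pipe(
    image=image,
    prompt="Dress the figure in the right image in the clothes from the left image.",
    guidance_scale=2.5,  # illustrative value
).images[0]
result.save("tryon_result.png")
```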
|
Dheeraj133/led-biomarker | Dheeraj133 | 2025-08-06T09:00:53Z | 10 | 0 | transformers | ["transformers", "safetensors", "led", "text2text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-06T09:00:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
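No usage code is given. Based on the tags (`led`, `text2text-generation`), a generic seq2seq sketch like the following is a reasonable starting point; the input text and generation settings are illustrative:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Dheeraj133/led-biomarker"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # LED conditional-generation checkpoint assumed

# Placeholder input; LED is designed for long documents.
text = "Long biomedical passage to condense goes here..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```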
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/SST-2-GLoRA-p10-seed10 | ekiprop | 2025-08-06T08:59:59Z | 107 | 0 | peft | ["peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2025-08-06T08:35:20Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p10-seed10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p10-seed10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1852
- Accuracy: 0.9369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4677 | 0.0950 | 200 | 0.2799 | 0.8911 |
| 0.359 | 0.1900 | 400 | 0.2460 | 0.8991 |
| 0.3366 | 0.2850 | 600 | 0.2384 | 0.9106 |
| 0.323 | 0.3800 | 800 | 0.2328 | 0.9128 |
| 0.3113 | 0.4751 | 1000 | 0.2767 | 0.9014 |
| 0.3041 | 0.5701 | 1200 | 0.2394 | 0.9128 |
| 0.2906 | 0.6651 | 1400 | 0.2178 | 0.9174 |
| 0.2962 | 0.7601 | 1600 | 0.2181 | 0.9186 |
| 0.2934 | 0.8551 | 1800 | 0.2219 | 0.9232 |
| 0.2826 | 0.9501 | 2000 | 0.2209 | 0.9197 |
| 0.3031 | 1.0451 | 2200 | 0.2063 | 0.9197 |
| 0.275 | 1.1401 | 2400 | 0.2060 | 0.9278 |
| 0.2817 | 1.2352 | 2600 | 0.2220 | 0.9117 |
| 0.2774 | 1.3302 | 2800 | 0.2124 | 0.9278 |
| 0.2666 | 1.4252 | 3000 | 0.2116 | 0.9232 |
| 0.2716 | 1.5202 | 3200 | 0.2089 | 0.9220 |
| 0.2748 | 1.6152 | 3400 | 0.2132 | 0.9186 |
| 0.2608 | 1.7102 | 3600 | 0.2157 | 0.9140 |
| 0.2685 | 1.8052 | 3800 | 0.2185 | 0.9278 |
| 0.2753 | 1.9002 | 4000 | 0.1957 | 0.9278 |
| 0.2699 | 1.9952 | 4200 | 0.2148 | 0.9232 |
| 0.2676 | 2.0903 | 4400 | 0.1885 | 0.9300 |
| 0.266 | 2.1853 | 4600 | 0.1922 | 0.9278 |
| 0.2588 | 2.2803 | 4800 | 0.2112 | 0.9289 |
| 0.2575 | 2.3753 | 5000 | 0.1868 | 0.9323 |
| 0.2575 | 2.4703 | 5200 | 0.2100 | 0.9220 |
| 0.2525 | 2.5653 | 5400 | 0.2013 | 0.9289 |
| 0.2602 | 2.6603 | 5600 | 0.1924 | 0.9278 |
| 0.2676 | 2.7553 | 5800 | 0.2016 | 0.9243 |
| 0.2466 | 2.8504 | 6000 | 0.2017 | 0.9300 |
| 0.2516 | 2.9454 | 6200 | 0.1944 | 0.9300 |
| 0.2542 | 3.0404 | 6400 | 0.2079 | 0.9232 |
| 0.2429 | 3.1354 | 6600 | 0.1951 | 0.9243 |
| 0.2446 | 3.2304 | 6800 | 0.1925 | 0.9243 |
| 0.2576 | 3.3254 | 7000 | 0.1901 | 0.9300 |
| 0.2376 | 3.4204 | 7200 | 0.1968 | 0.9346 |
| 0.2434 | 3.5154 | 7400 | 0.2084 | 0.9289 |
| 0.2468 | 3.6105 | 7600 | 0.2006 | 0.9323 |
| 0.2491 | 3.7055 | 7800 | 0.1852 | 0.9369 |
| 0.2453 | 3.8005 | 8000 | 0.1931 | 0.9335 |
| 0.2454 | 3.8955 | 8200 | 0.2028 | 0.9335 |
| 0.2562 | 3.9905 | 8400 | 0.1896 | 0.9300 |
| 0.2385 | 4.0855 | 8600 | 0.2001 | 0.9323 |
| 0.2393 | 4.1805 | 8800 | 0.2002 | 0.9312 |
| 0.2443 | 4.2755 | 9000 | 0.2043 | 0.9312 |
| 0.2313 | 4.3705 | 9200 | 0.2018 | 0.9300 |
| 0.2306 | 4.4656 | 9400 | 0.2046 | 0.9312 |
| 0.2224 | 4.5606 | 9600 | 0.2005 | 0.9346 |
| 0.2325 | 4.6556 | 9800 | 0.1969 | 0.9312 |
| 0.2424 | 4.7506 | 10000 | 0.1972 | 0.9312 |
| 0.2432 | 4.8456 | 10200 | 0.1938 | 0.9335 |
| 0.2485 | 4.9406 | 10400 | 0.1932 | 0.9335 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
looppayments/dux_v0_mini | looppayments | 2025-08-06T08:59:42Z | 37 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-08-05T16:57:28Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
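Models pushed this way are usually loaded back through the same mixin class. A minimal sketch is shown below, where `DuxModel` is a hypothetical stand-in for the author's actual `nn.Module` subclass (the real definition lives in their unlinked code):
```python
import torch
from huggingface_hub import PyTorchModelHubMixin


# Hypothetical class; the constructor must match the config stored with the checkpoint.
class DuxModel(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = torch.nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.layer(x)


# from_pretrained restores the config and weights pushed via the mixin's push_to_hub.
model = DuxModel.from_pretrained("looppayments/dux_v0_mini")
```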
|
a1o/pick-blue-cube_sarm_policy | a1o | 2025-08-06T08:56:43Z | 9 | 0 | lerobot | ["lerobot", "safetensors", "act", "robotics", "dataset:a1o/pick-blue-cube_sarm", "arxiv:2304.13705", "license:apache-2.0", "region:us"] | robotics | 2025-08-06T08:56:36Z |
---
datasets: a1o/pick-blue-cube_sarm
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
townwish/tiny-OPTForCausalLM-lora | townwish | 2025-08-06T08:40:39Z | 17 | 0 | peft | ["peft", "safetensors", "region:us"] | null | 2025-08-06T08:27:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
huynguyendbs/gpt-oss-20b-mlx | huynguyendbs | 2025-08-06T08:39:41Z | 422 | 1 | mlx | ["mlx", "safetensors", "gpt_oss", "vllm", "text-generation", "conversational", "base_model:openai/gpt-oss-20b", "base_model:quantized:openai/gpt-oss-20b", "license:apache-2.0", "4-bit", "region:us"] | text-generation | 2025-08-06T08:39:24Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: mlx
tags:
- vllm
- mlx
base_model: openai/gpt-oss-20b
---
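The card body is empty. Since the listing declares `library_name: mlx` and `pipeline_tag: text-generation`, the usual way to run such a conversion is with the `mlx-lm` package; a minimal sketch, assuming an Apple-silicon machine with `mlx-lm` installed:
```python
from mlx_lm import load, generate  # pip install mlx-lm

model, tokenizer = load("huynguyendbs/gpt-oss-20b-mlx")
# Prompt is illustrative.
response = generate(model, tokenizer, prompt="Explain mixture-of-experts in one paragraph.", verbose=True)
print(response)
```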
|
voicing-ai/stt_multilingual_fastconformer_euro4 | voicing-ai | 2025-08-06T08:38:52Z | 9 | 0 | nemo | ["nemo", "region:us"] | null | 2025-08-06T08:25:49Z |
This model supports English, French, Spanish, and German.
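There are no loading instructions in the card. NeMo checkpoints hosted on the Hub are commonly fetched and restored roughly as follows; this is only a sketch, and the `.nemo` filename below is a guess that must be checked against the repository's file listing:
```python
from huggingface_hub import hf_hub_download
import nemo.collections.asr as nemo_asr

# Filename is hypothetical; copy the real .nemo filename from the repo's Files tab.
ckpt = hf_hub_download(
    repo_id="voicing-ai/stt_multilingual_fastconformer_euro4",
    filename="stt_multilingual_fastconformer_euro4.nemo",
)
asr_model = nemo_asr.models.ASRModel.restore_from(ckpt)
print(asr_model.transcribe(["sample_audio.wav"]))  # path is a placeholder
```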
|
FrontierInstruments/merged_softstart_reasoning_r32a64_2epochs | FrontierInstruments | 2025-08-06T08:35:40Z | 35 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-06T08:32:37Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** FrontierInstruments
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oguzkyilmaz/Asteria-Bungalov-Gemma-9B | oguzkyilmaz | 2025-08-06T08:32:34Z | 0 | 0 | null | ["safetensors", "bungalow", "tourism", "text-generation", "conversational", "tr", "base_model:ytu-ce-cosmos/Turkish-Gemma-9b-v0.1", "base_model:finetune:ytu-ce-cosmos/Turkish-Gemma-9b-v0.1", "license:apache-2.0", "region:us"] | text-generation | 2025-08-04T18:16:20Z |
---
license: apache-2.0
language:
- tr
base_model:
- ytu-ce-cosmos/Turkish-Gemma-9b-v0.1
pipeline_tag: text-generation
tags:
- bungalow
- tourism
---
Model Details:
github.com/oguzkyilmaz/Asteria-Bungalov-Chatbot/
- **Developed by:** [oguzkyilmaz]
Using the model
You can access the fine-tuned model (adapter) on Hugging Face and get started easily with the Python code below.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch
base_model_name = "ytu-ce-cosmos/Turkish-Gemma-9b-v0.1"
adapter_model_name = "oguzkyilmaz/Asteria-Bungalov-Gemma-9B"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
base_model_name,
quantization_config=bnb_config,
device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_model_name)
prompt = "Merhaba, Asteria Bungalov hakkında bilgi verebilir misin?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
- PEFT 0.15.2
|
ekiprop/SST-2-GLoRA-p10-seed54 | ekiprop | 2025-08-06T08:30:36Z | 109 | 0 | peft | ["peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2025-08-05T13:22:10Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p10-seed54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p10-seed54
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2053
- Accuracy: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.458 | 0.0950 | 200 | 0.2671 | 0.8842 |
| 0.3406 | 0.1900 | 400 | 0.2413 | 0.8933 |
| 0.3191 | 0.2850 | 600 | 0.2482 | 0.9060 |
| 0.3107 | 0.3800 | 800 | 0.2397 | 0.9037 |
| 0.2896 | 0.4751 | 1000 | 0.2199 | 0.9197 |
| 0.2836 | 0.5701 | 1200 | 0.2091 | 0.9220 |
| 0.2816 | 0.6651 | 1400 | 0.2097 | 0.9209 |
| 0.2828 | 0.7601 | 1600 | 0.2068 | 0.9220 |
| 0.2841 | 0.8551 | 1800 | 0.2040 | 0.9197 |
| 0.2735 | 0.9501 | 2000 | 0.2053 | 0.9255 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
hedronstone/gpt-oss-20b-medical-QA | hedronstone | 2025-08-06T08:28:49Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "medical", "en", "dataset:hedronstone/PubMedQA-Reasoning-Traces", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-06T07:56:20Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
- medical
licence: license
license: apache-2.0
datasets:
- hedronstone/PubMedQA-Reasoning-Traces
language:
- en
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
It has been trained on 15,000 reasoning traces generated from [PubMedQA](https://huggingface.co/datasets/qiaojin/PubMedQA).
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
from accelerate import Accelerator
# Initialize accelerator for multi-GPU support
accelerator = Accelerator()
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
# Set pad token to avoid attention mask warning
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Load the original model
model_kwargs = {
"attn_implementation": "eager",
"torch_dtype": "auto",
"use_cache": True,
"device_map": "auto" # Let accelerate handle device placement
}
base_model = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", **model_kwargs)
# Merge fine-tuned weights with the base model
peft_model_id = "hedronstone/gpt-oss-20b-medical-QA"
model = PeftModel.from_pretrained(base_model, peft_model_id)
model = model.merge_and_unload()
# Prepare model with accelerate
model, tokenizer = accelerator.prepare(model, tokenizer)
# Define system prompt and question
SYSTEM_PROMPT = '''You are an expert clinician that provides accurate and concise responses to medical questions.'''
question = "Are group 2 innate lymphoid cells (ILC2s) increased in chronic rhinosinusitis with nasal polyps or eosinophilia?"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": question},
]
# Tokenize input
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
).to(accelerator.device) # Use accelerator.device instead of model.device
# Generation kwargs
gen_kwargs = {
"max_new_tokens": 512,
"do_sample": True,
"temperature": 0.6,
"top_p": None,
"top_k": None
}
# Generate response
with accelerator.autocast():
output_ids = model.generate(input_ids, **gen_kwargs)
# Decode response
response = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print(response)
```
## Sample Output
```text
To determine if group 2 innate lymphoid cells (ILC2s) are increased in chronic rhinosinusitis with nasal polyps or eosinophilia, we can analyze the provided context step by step:
1. **Understanding ILC2s**: Group 2 innate lymphoid cells are known to be involved in allergic inflammation and are thought to play a role in conditions like chronic rhinosinusitis with nasal polyps.
2. **Study Design**: The study involved collecting samples from 52 patients with chronic rhinosinusitis and 16 controls. This allows for a comparative analysis of ILC2 levels between affected individuals and healthy controls.
3. **Results on ILC2 Levels**: The findings indicate that ILC2s were significantly increased in both blood and sinus tissues of patients with chronic rhinosinusitis. Specifically, the percentages of ILC2s in the blood were 0.45% for patients with nasal polyps and 0.34% for those without, compared to only 0.06% in controls. This suggests a clear increase in ILC2 levels in patients.
4. **Correlation with Eosinophilia**: The study also found that ILC2 levels were positively correlated with eosinophilia in both blood and sinus tissues. This means that as ILC2 levels increase, eosinophil levels also tend to increase, indicating a potential link between ILC2s and eosinophilic inflammation in these patients.
5. **Conclusion**: Overall, the evidence supports that group 2 innate lymphoid cells are indeed increased in chronic rhinosinusitis with nasal polyps and are associated with eosinophilia. This suggests that ILC2s may play a significant role in the pathophysiology of this condition.assistantfinalILC2 levels are increased in the blood and sinus tissues of patients with chronic rhinosinusitis. ILC2 levels are positively correlated with eosinophilia, suggesting an important role for ILC2s in the pathogenesis of chronic rhinosinusitis.
Final Decision: yes
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sh-science-team/huggingface/runs/jsigp4sv)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggufff/how-to-run-gpt-oss | ggufff | 2025-08-06T08:26:16Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2025-08-06T08:24:31Z |
---
license: mit
---
# 🧠 How to Run GPT-OSS Models Locally
## 🔍 What is GPT-OSS?
**GPT-OSS** refers to **open-source GPT-style models** that can be run locally on your own machine — no internet, no API keys, and full control.
Popular open-source GPT-OSS models:
| Model | Params | License | Type |
|-------|--------|---------|------|
| `gpt-oss-120b` | 120B | Apache 2.0 | Dense |
| `gpt-oss-20b` | 20B | Apache 2.0 | Dense |
| `gpt-oss-117B-moe` | 117B | Apache 2.0 | Mixture-of-Experts (MoE) |
| `gpt-oss-21B-moe` | 21B | Apache 2.0 | MoE |
These models excel in:
- High reasoning capability
- Tool use (plugins/functions)
- Multi-turn chat
- Local + Enterprise deployments
---
## 🚀 Running GPT-OSS Locally (Step-by-Step)
### 🛠 Requirements
- A modern CPU or GPU (16+ GB RAM or VRAM preferred)
- Python 3.10+
- One of these backends:
- `llama-cpp-python`
- `ctransformers`
- `llamafile`
- `gguf-loader` (**recommended**)
### 🧩 Step 1: Download a GPT-OSS Model (GGUF format)
Pick a GGUF-quantized version of the model from Hugging Face:
#### 🔗 Model Download Links:
- **GPT-OSS 120B**: https://huggingface.co/gpt-oss/gpt-oss-120b-GGUF
- **GPT-OSS 20B**: https://huggingface.co/gpt-oss/gpt-oss-20b-GGUF
- **GPT-OSS 117B-MoE**: https://huggingface.co/gpt-oss/gpt-oss-117b-moe-GGUF
- **GPT-OSS 21B-MoE**: https://huggingface.co/gpt-oss/gpt-oss-21b-moe-GGUF
> These models are available in 2-bit, 3-bit, 4-bit, and 8-bit quantizations for lower memory usage.
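If you prefer to script the download instead of grabbing files by hand, something like the following works with the `huggingface_hub` client; the exact `.gguf` filename varies per repo and quantization, so treat the one below as a placeholder:
```python
from huggingface_hub import hf_hub_download

# Placeholder filename: browse the repo's file list and copy the real quant you want.
path = hf_hub_download(
    repo_id="gpt-oss/gpt-oss-20b-GGUF",
    filename="gpt-oss-20b.Q4_K_M.gguf",
)
print("Model saved to:", path)
```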
---
## 💡 Why Use GGUF Format?
**GGUF (GPT-Generated Unified Format)** is:
- Optimized for local inference
- Compatible with multiple backends
- Easy to compress & distribute
- Fast to load, even on laptops
---
## ✅ Best Way to Run These Models: **GGUF Loader**
### 🔧 What is GGUF Loader?
**GGUF Loader** is a fast, user-friendly app that lets you:
- Load any GGUF model in one click
- Detect and use your CPU/GPU automatically
- Chat, summarize, or generate with models locally
- Run offline forever — no cloud, no limits

---
## 📦 Install GGUF Loader
### 💻 Windows / Linux / macOS:
1. Download the latest release from GitHub:
🔗 [https://github.com/GGUFloader/gguf-loader/releases](https://github.com/GGUFloader/gguf-loader/releases)
2. Run the installer or binary.
3. Drop your `.gguf` models into the **models/** folder.
4. Launch the app and start chatting!
---
## 🌟 Why Choose GGUF Loader?
| Feature | GGUF Loader | Others |
|--------|--------------|--------|
| Auto GPU/CPU Detection | ✅ | ❌ |
| Offline Mode | ✅ | 🔶 |
| Supports Large Models | ✅ | 🔶 |
| No Setup Required | ✅ | ❌ |
| Addon System (Extendable UI) | ✅ | ❌ |
| Lightweight UI | ✅ | ❌ |
> GGUF Loader is open-source and optimized for **speed, portability, and extensibility**.
---
## 🔗 Resources
- 🌐 GGUF Loader Website: https://ggufloader.github.io
- 🧠 Learn about GGUF: https://huggingface.co/docs/transformers/main/en/model_doc/gguf
- 💻 Hugging Face GPT-OSS Models: https://huggingface.co/gpt-oss
---
## 🙌 Final Thoughts
Running GPT-OSS locally gives you **freedom**, **privacy**, and **performance**.
If you’re serious about local LLMs, give **GGUF Loader** a try — it's the easiest way to get started.
> **One model, your machine, endless possibilities.**
|
darshandugar/MailClassifier-DistilBERT | darshandugar | 2025-08-06T08:25:13Z | 6 | 1 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "MailClassification", "Corporate", "LoRA", "token-classification", "en", "dataset:infinite-dataset-hub/CorporateMailCategorization", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-08-06T08:08:15Z |
---
license: mit
datasets:
- infinite-dataset-hub/CorporateMailCategorization
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: token-classification
library_name: transformers
tags:
- MailClassification
- Corporate
- LoRA
---
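The card body is empty. Given the `distilbert` and `text-classification` tags and the corporate-mail-categorization dataset, a minimal sketch for trying the classifier would be as follows; the example email is illustrative, and the label set is whatever the checkpoint defines:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="darshandugar/MailClassifier-DistilBERT")
# Example email text is illustrative.
print(classifier("Hi team, please find attached the Q3 budget review for tomorrow's meeting."))
```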
|
longhoang2112/whisper-tiny-fine-tuning | longhoang2112 | 2025-08-06T08:22:00Z | 8 | 0 | peft | ["peft", "region:us"] | null | 2025-08-05T20:22:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
NDugar/flan-t5-small-Q8_0-GGUF | NDugar | 2025-08-06T08:21:47Z | 9 | 0 | null | ["gguf", "text2text-generation", "llama-cpp", "gguf-my-repo", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "base_model:google/flan-t5-small", "base_model:quantized:google/flan-t5-small", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-06T08:21:45Z |
---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
- llama-cpp
- gguf-my-repo
widget:
- text: 'Translate to German: My name is Arthur'
example_title: Translation
- text: Please answer to the following question. Who is going to be the next Ballon
d'or?
example_title: Question Answering
- text: 'Q: Can Geoffrey Hinton have a conversation with George Washington? Give the
rationale before answering.'
example_title: Logical reasoning
- text: Please answer the following question. What is the boiling point of Nitrogen?
example_title: Scientific knowledge
- text: Answer the following yes/no question. Can you write a whole Haiku in a single
tweet?
example_title: Yes/no question
- text: Answer the following yes/no question by reasoning step-by-step. Can you write
a whole Haiku in a single tweet?
example_title: Reasoning task
- text: 'Q: ( False or not False or False ) is? A: Let''s think step by step'
example_title: Boolean Expressions
- text: The square root of x is the cube root of y. What is y to the power of 2, if
x = 4?
example_title: Math reasoning
- text: 'Premise: At my age you will probably have learnt one lesson. Hypothesis: It''s
not certain how many lessons you''ll learn by your thirties. Does the premise
entail the hypothesis?'
example_title: Premise and hypothesis
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
base_model: google/flan-t5-small
---
# NDugar/flan-t5-small-Q8_0-GGUF
This model was converted to GGUF format from [`google/flan-t5-small`](https://huggingface.co/google/flan-t5-small) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/flan-t5-small) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NDugar/flan-t5-small-Q8_0-GGUF --hf-file flan-t5-small-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NDugar/flan-t5-small-Q8_0-GGUF --hf-file flan-t5-small-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo NDugar/flan-t5-small-Q8_0-GGUF --hf-file flan-t5-small-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo NDugar/flan-t5-small-Q8_0-GGUF --hf-file flan-t5-small-q8_0.gguf -c 2048
```
|
frednamfred/mistral-7b-qlora-alpaca-sample-0.5k_instruct-wo-input_cot4-cf-pf
|
frednamfred
| 2025-08-06T08:21:10Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T07:59:44Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
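As a minimal sketch (not an official snippet from the authors), assuming the repository contains a merged causal-LM checkpoint as the `mistral`/`text-generation` tags suggest, loading with 🤗 Transformers would look roughly like this:
```python
# Minimal sketch, not the authors' documented usage. Assumes the repo holds a
# merged Mistral causal-LM checkpoint; the Alpaca-style prompt below is also an
# assumption based on the repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "frednamfred/mistral-7b-qlora-alpaca-sample-0.5k_instruct-wo-input_cot4-cf-pf"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "### Instruction:\nSummarize what QLoRA does in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```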
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Anjoe/AesMMIT_LLaVA_v1.5_7b_240325-hf
|
Anjoe
| 2025-08-06T08:19:21Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-06T07:35:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
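As a hedged sketch (assuming the `-hf` suffix means the checkpoint follows the standard `llava-hf` conversion layout), inference with 🤗 Transformers would look roughly like this:
```python
# Hedged sketch: standard LLaVA-1.5 inference via transformers. The conversion
# layout, prompt template and example image URL are assumptions, not taken from
# this repository's documentation.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

repo_id = "Anjoe/AesMMIT_LLaVA_v1.5_7b_240325-hf"
processor = AutoProcessor.from_pretrained(repo_id)
model = LlavaForConditionalGeneration.from_pretrained(repo_id, device_map="auto")

image = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw)
prompt = "USER: <image>\nDescribe the aesthetic qualities of this image. ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```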
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PradeepBodhi/BioMistral_Lora_Fine-Tuned_test
|
PradeepBodhi
| 2025-08-06T08:13:03Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:BioMistral/BioMistral-7B",
"base_model:quantized:BioMistral/BioMistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T08:11:56Z |
---
base_model: BioMistral/BioMistral-7B
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** PradeepBodhi
- **License:** apache-2.0
- **Finetuned from model :** BioMistral/BioMistral-7B
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
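Since the repository ships GGUF weights, a hedged local-inference sketch with `llama-cpp-python` might look like the following (the `filename` glob is an assumption — check the repo's file list for the actual shard name):
```python
# Hedged sketch using llama-cpp-python; not an official usage example.
# The filename glob below is an assumption -- replace it with the exact GGUF
# file listed in this repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="PradeepBodhi/BioMistral_Lora_Fine-Tuned_test",
    filename="*.gguf",  # narrow this if the repo contains several quantizations
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List two common symptoms of anemia."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```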
|
phospho-app/MaxFridge-ACT_BBOX-stacking_blocks_v4-zqshd
|
phospho-app
| 2025-08-06T08:12:36Z | 36 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-08-06T07:49:32Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/stacking_blocks_v4_bboxes](https://huggingface.co/datasets/phospho-app/stacking_blocks_v4_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
LarryAIDraw/shenhe-ingame-illustriousxl-lora-nochekaiser
|
LarryAIDraw
| 2025-08-06T08:11:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-05T17:59:53Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1831503/shenhe-genshin-impact-commission
|
haruntrkmn/ty-bg-remover-test
|
haruntrkmn
| 2025-08-06T08:10:32Z | 0 | 0 | null |
[
"onnx",
"computer-vision",
"image-background-removal",
"image-matting",
"e-commerce",
"is-net",
"image-segmentation",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2025-08-06T07:54:16Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
language: []
base_model: isnet-general-use.pth
model_type: ty_fashion_bg_remover
tags:
- computer-vision
- image-background-removal
- image-matting
- e-commerce
- is-net
---
# TY Fashion Background Remover
_IS-Net-based background removal model for fashion images in the Trendyol e-commerce catalogue._
## Model Details
- **Architecture**: IS-Net
- **Objective**: Fine-tune the isnet-general-use model on Trendyol fashion images to improve performance on fashion imagery
- **Training Data**: Large-scale Trendyol fashion product image dataset containing human models
- **Hardware**: Multi-GPU training with PyTorch
- **Framework**: PyTorch
## Intended Use
- Isolate human models in fashion product images by removing the image background
## Usage
Complete example to load the model, remove background of an image, and save the results:
```python
"""
ONNX inference script for image segmentation model.
This script loads an ONNX model and performs inference on an input image to generate
an alpha mask. The mask is combined with the RGB image and saved as output.
"""
import onnxruntime as ort
from utils import process_image
if __name__ == "__main__":
MODEL_PATH = "model.onnx"
SRC = "https://cdn.dsmcdn.com/ty184/product/media/images/20210924/23/136268224/224296134/1/1_org_zoom.jpg"
OUTPUT_FILE = "out.png"
# Initialize ONNX runtime session with CUDA and CPU providers
ort_session = ort.InferenceSession(
MODEL_PATH,
providers=["CUDAExecutionProvider", "CPUExecutionProvider"]
)
process_image(SRC, ort_session, MODEL_PATH, OUTPUT_FILE)
```
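If you prefer not to depend on the repository's `utils.process_image` helper, the steps it wraps typically look like the sketch below. The preprocessing constants and output handling are assumptions based on common IS-Net pipelines, not the exact Trendyol implementation:
```python
# Hedged sketch of a generic IS-Net ONNX matting pipeline; the repository's
# utils.process_image remains the reference implementation.
import numpy as np
import onnxruntime as ort
from PIL import Image

def remove_background(image_path: str, session: ort.InferenceSession, size=(1200, 1800)):
    """size is (width, height); the card reports a 1800x1200 model input -- adjust if needed."""
    image = Image.open(image_path).convert("RGB")
    x = np.asarray(image.resize(size), dtype=np.float32) / 255.0  # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))[None, ...]                      # HWC -> NCHW
    input_name = session.get_inputs()[0].name
    mask = np.squeeze(session.run(None, {input_name: x})[0])       # assume first output is the alpha map
    mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)  # normalize to [0, 1]
    alpha = Image.fromarray((mask * 255).astype(np.uint8)).resize(image.size)
    image.putalpha(alpha)                                          # RGB + predicted alpha
    return image

# result = remove_background("product.jpg", ort_session)
# result.save("out.png")
```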
## Model Performance
- **High-accuracy image matting**: Handles intricate details on human models, such as hair and clothing textures.
### Training Configuration
- **Backbone**: IS-Net general use model trained on DIS dataset V1.0: DIS5K
- **Model Input Size**: 1800x1200
- **Training Framework**: Torch 1.13.1
## Limitations
- **Domain Specificity**: Optimized for e-commerce fashion product images with human models included; may not generalize well to other image domains
- **Image Quality**: Performance may degrade on low-quality, heavily compressed, or significantly distorted images
- **Category Bias**: Performance may vary across different product categories based on training data distribution
## Ethical Considerations
- **Commercial Use**: Designed for e-commerce applications; consider potential impacts on market competition
- **Privacy**: Ensure compliance with data protection regulations when processing product images
- **Fairness**: Monitor for biased similarity judgments across different product categories or brands
## Citation
```bibtex
@misc{trendyol2025fashionbgremover,
title={TY Fashion Background Remover},
author={Trendyol Data Science Team},
year={2025},
howpublished={\url{https://huggingface.co/trendyol/ty-fashion-bg-remover}}
}
```
## Model Card Authors
- Trendyol Data Science Team
## License
This model is released by Trendyol as a source-available, non-open-source model.
### You are allowed to:
- View, download, and evaluate the model weights.
- Use the model for non-commercial research and internal testing.
- Use the model or its derivatives for commercial purposes, provided that:
- You cite Trendyol as the original model creator.
- You notify Trendyol in advance via [[email protected]] or other designated contact.
### You are not allowed to:
- Redistribute or host the model or its derivatives on third-party platforms without prior written consent from Trendyol.
- Use the model in applications violating ethical standards, including but not limited to surveillance, misinformation, or harm to individuals or groups.
By downloading or using this model, you agree to the terms above.
© 2025 Trendyol Teknoloji A.Ş. All rights reserved.
See the [LICENSE](LICENSE) file for more details.
---
_For technical support or questions about this model, please contact the Trendyol Data Science team._
|
est-ai/Qwen3-LoRA-GRPO-StructEval
|
est-ai
| 2025-08-06T08:09:46Z | 10 | 0 | null |
[
"safetensors",
"qwen3",
"fine-tuned",
"struct-eval",
"structured-evaluation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:mit",
"region:us"
] | null | 2025-08-06T08:08:36Z |
---
license: mit
base_model: Qwen/Qwen3-0.6B
tags:
- fine-tuned
- struct-eval
- structured-evaluation
- qwen3
---
# Qwen3-LoRA-GRPO-StructEval
This model is a fine-tuned version of Qwen3-0.6B for StructEval tasks.
## Model Details
- **Base Model**: Qwen/Qwen3-0.6B
- **Fine-tuned for**: StructEval (Structured Evaluation) tasks
- **Training Framework**: LLaMA-Factory
- **Model Size**: 0.6B parameters
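The card does not include a usage snippet; as a hedged sketch (assuming the uploaded safetensors form a full merged checkpoint rather than a standalone adapter), loading with 🤗 Transformers would look roughly like this:
```python
# Hedged sketch: treat the repo as a merged causal-LM checkpoint. If it actually
# holds only a LoRA adapter, load Qwen/Qwen3-0.6B first and attach it with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "est-ai/Qwen3-LoRA-GRPO-StructEval"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Return a JSON object with keys 'name' and 'age' for a fictional person."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```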
|
Thireus/GLM-4.5-THIREUS-Q4_K-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T08:06:18Z | 69 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-30T06:28:08Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
yyy111yyy/CATPLUG-Ti
|
yyy111yyy
| 2025-08-06T08:03:58Z | 14 | 0 | null |
[
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-08-04T05:20:26Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
|
chengyuhancode/ZaowuzhiAI
|
chengyuhancode
| 2025-08-06T07:58:16Z | 0 | 0 | null |
[
"text-to-3d",
"zh",
"license:apache-2.0",
"region:us"
] |
text-to-3d
| 2025-08-06T07:56:50Z |
---
license: apache-2.0
language:
- zh
pipeline_tag: text-to-3d
---
|
shenzhentianyi/unsloth_Qwen3-14B-Base_adaptor_checkpoint-1015
|
shenzhentianyi
| 2025-08-06T07:56:54Z | 26 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:shenzhentianyi/unsloth_Qwen3-14B-Base",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"base_model:shenzhentianyi/unsloth_Qwen3-14B-Base",
"region:us"
] |
text-generation
| 2025-08-06T07:33:37Z |
---
base_model: shenzhentianyi/unsloth_Qwen3-14B-Base
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:shenzhentianyi/unsloth_Qwen3-14B-Base
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
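As a minimal sketch (assuming the base checkpoint `shenzhentianyi/unsloth_Qwen3-14B-Base` is accessible), the adapter can be attached with peft roughly as follows:
```python
# Hedged sketch: attach this LoRA adapter to its base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "shenzhentianyi/unsloth_Qwen3-14B-Base"
adapter_id = "shenzhentianyi/unsloth_Qwen3-14B-Base_adaptor_checkpoint-1015"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```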
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
acertainbru/emeltal-collection
|
acertainbru
| 2025-08-06T07:55:19Z | 3,234 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-03T08:36:51Z |
The models used by [Emeltal](https://github.com/ptsochantaris/emeltal) in GGUF form.
You can find a list of the models used by following the above link.
This repo exists to ensure that model files do not disappear over time and are available for use in Emeltal. You are strongly encouraged not to use these models directly and instead visit the repos of the original models.
Use of each model is subject to the terms and conditions outlined in their original repo.
|
ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed54
|
ekiprop
| 2025-08-06T07:54:46Z | 21 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-06T07:51:59Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-HEURISTIC-Standard_LoRA-Q_V-seed54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-HEURISTIC-Standard_LoRA-Q_V-seed54
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1999
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3984 | 0.0950 | 200 | 0.2186 | 0.9209 |
| 0.2901 | 0.1900 | 400 | 0.2057 | 0.9220 |
| 0.273 | 0.2850 | 600 | 0.2179 | 0.9163 |
| 0.2505 | 0.3800 | 800 | 0.1999 | 0.9335 |
| 0.2422 | 0.4751 | 1000 | 0.2175 | 0.9266 |
| 0.2338 | 0.5701 | 1200 | 0.2029 | 0.9278 |
| 0.2363 | 0.6651 | 1400 | 0.2006 | 0.9232 |
| 0.2284 | 0.7601 | 1600 | 0.1959 | 0.9255 |
| 0.2307 | 0.8551 | 1800 | 0.1993 | 0.9312 |
| 0.2111 | 0.9501 | 2000 | 0.2063 | 0.9300 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
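For reference, a hedged usage sketch (assuming the adapter was saved together with its classification head; if not, the randomly initialized head will not reproduce the reported accuracy):
```python
# Hedged sketch: load the LoRA adapter on top of roberta-base for SST-2-style
# binary sentiment classification.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed54"
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("a gripping, beautifully shot film", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("positive" if logits.argmax(-1).item() == 1 else "negative")
```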
|
KaraKaraWitch/FindYourSwordInThisLand-Llama-3.3-72b
|
KaraKaraWitch
| 2025-08-06T07:54:12Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:BruhzWater/Sapphira-L3.3-70b-0.1",
"base_model:merge:BruhzWater/Sapphira-L3.3-70b-0.1",
"base_model:BruhzWater/Serpents-Tongue-L3.3-70b-0.3",
"base_model:merge:BruhzWater/Serpents-Tongue-L3.3-70b-0.3",
"base_model:Delta-Vector/Plesio-70B",
"base_model:merge:Delta-Vector/Plesio-70B",
"base_model:KaraKaraWitch/Llama-3.3-MagicalGirl-2",
"base_model:merge:KaraKaraWitch/Llama-3.3-MagicalGirl-2",
"base_model:KaraKaraWitch/oiiaioiiai-B",
"base_model:merge:KaraKaraWitch/oiiaioiiai-B",
"base_model:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0",
"base_model:merge:ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0",
"base_model:Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B",
"base_model:merge:Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T19:01:14Z |
---
base_model:
- Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B
- ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0
- BruhzWater/Serpents-Tongue-L3.3-70b-0.3
- KaraKaraWitch/Llama-3.3-MagicalGirl-2
- ReadyArt/Forgotten-Safeword-70B-v5.0
- Delta-Vector/Plesio-70B
- BruhzWater/Sapphira-L3.3-70b-0.1
- KaraKaraWitch/oiiaioiiai-B
library_name: transformers
tags:
- mergekit
- merge
---
# FindYourSwordInThisLand-Llama-3.3-72b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Context
Not much going for this model. I saw a new model (`BruhzWater/Sapphira-L3.3-70b-0.1`, congratulations by the way) that ranked higher than SteelSkull's Nevoria.
This merge uses my best-scoring model, `MagicalGirl 2`, as the base and includes the two BruhzWater models, `Sapphira` and `Serpents Tongue`.
Additionally, I'm trying out Tarek's Dungeonmaster model as well as two ReadyArt models. Delta Vector's (love ya, mango) `Plesio` is included for the additional fun factor.
Still not giving up hope with `oiiaioiiai-B`. So in it goes into the fire ring.
The whole thing is strung together with hopes and prayers at 3am with an SCE merge.
I might do a UGI-Benchmark check later. But I doubt it would be any better than any of its core components.
## Model Vibes
- It's very sloppy writing. I'm not entirely sure what went wrong here but... I think I might restart from scratch for this.
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [KaraKaraWitch/Llama-3.3-MagicalGirl-2](https://huggingface.co/KaraKaraWitch/Llama-3.3-MagicalGirl-2) as a base.
### Models Merged
The following models were included in the merge:
* [Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B](https://huggingface.co/Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B)
* [ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0](https://huggingface.co/ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0)
* [BruhzWater/Serpents-Tongue-L3.3-70b-0.3](https://huggingface.co/BruhzWater/Serpents-Tongue-L3.3-70b-0.3)
* [ReadyArt/Forgotten-Safeword-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-v5.0)
* [Delta-Vector/Plesio-70B](https://huggingface.co/Delta-Vector/Plesio-70B)
* [BruhzWater/Sapphira-L3.3-70b-0.1](https://huggingface.co/BruhzWater/Sapphira-L3.3-70b-0.1)
* [KaraKaraWitch/oiiaioiiai-B](https://huggingface.co/KaraKaraWitch/oiiaioiiai-B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: KaraKaraWitch/oiiaioiiai-B
- model: ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.0
- model: ReadyArt/Forgotten-Safeword-70B-v5.0
- model: BruhzWater/Sapphira-L3.3-70b-0.1
- model: BruhzWater/Serpents-Tongue-L3.3-70b-0.3
- model: Tarek07/Dungeonmaster-V2.2-Expanded-LLaMa-70B
- model: Delta-Vector/Plesio-70B
merge_method: sce
base_model: KaraKaraWitch/Llama-3.3-MagicalGirl-2
parameters:
normalize: true
select_topk: 0.75
dtype: bfloat16
```
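As a reproduction sketch (assuming the YAML above is saved as `config.yaml` and mergekit is installed), the merge can be re-run with the standard mergekit CLI:
```bash
# Hedged sketch: reproduce the SCE merge with the mergekit CLI.
pip install mergekit
mergekit-yaml config.yaml ./FindYourSwordInThisLand-Llama-3.3-72b --cuda --lazy-unpickle
```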
|
arianaazarbal/underspecified_hacker_8_5_neutral_123
|
arianaazarbal
| 2025-08-06T07:53:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T07:53:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wantsleep/OLMoE_1B_7B_Eagle3
|
wantsleep
| 2025-08-06T07:51:12Z | 727 | 0 | null |
[
"pytorch",
"safetensors",
"olmoe",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"base_model:allenai/OLMoE-1B-7B-0125-Instruct",
"base_model:finetune:allenai/OLMoE-1B-7B-0125-Instruct",
"region:us"
] | null | 2025-07-24T11:02:46Z |
---
base_model:
- allenai/OLMoE-1B-7B-0125-Instruct
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
---
# OLMoE-1B-7B-Eagle3 Draft Model
This repository provides the EAGLE-3 draft model weights, related code, and training data for [allenai/OLMoE-1B-7B-0125-Instruct](https://huggingface.co/allenai/OLMoE-1B-7B-0125-Instruct).
---
## 📦 Included Files
- `pytorch_model.bin`: Trained EAGLE Draft model weights
- `config.json`: Model configuration file (OLMoE architecture)
- `tokenizer_config.json`: Tokenizer configuration file
- `modeling_olmoe_kv.py`: OLMoE-specific model code (required for EAGLE inference)
- `eagle_data.json`: Training dataset (ShareGPT questions + OLMoE-generated answers)
- `.gitattributes`: Git LFS settings, etc.
---
## 🦅 What is the EAGLE Draft Model?
EAGLE is a framework designed to dramatically accelerate inference for large language models (LLMs)
by training a **draft decoder layer** separately.
- Fully compatible with **OLMoE-1B-7B-0125-Instruct** architecture
- The EAGLE Draft layer is structurally similar to the main model’s decoder
- During inference, the draft layer generates multiple tokens in advance, which are then verified/accepted by the main model
---
## 📝 Training Data Description
- **eagle_data.json**
- Only **questions (prompts)** are extracted from the ShareGPT dataset
- For each question, the **allenai/OLMoE-1B-7B-0125-Instruct** model generates its own answer
- Thus, the **model’s self-generated answers** are used as ground truth to train the draft layer
- This approach ensures the draft layer learns a distribution very close to the main model’s decoder,
maximizing EAGLE inference performance
---
## 🛠️ Usage
### 1. Using Model Weights/Config Files
- `pytorch_model.bin`, `config.json`, and `tokenizer_config.json`
can be used directly with HuggingFace Transformers or EAGLE code.
### 2. Integrating with EAGLE Inference Code
- Copy `modeling_olmoe_kv.py`
into the official EAGLE repo at `EAGLE/eagle/model/`.
- In your EAGLE inference script, import as:
```python
from eagle.model.modeling_olmoe_kv import OlmoeForCausalLM
```
### 3. Example Code
```python
from eagle.model.ea_model import EaModel
from fastchat.model import get_conversation_template
from transformers import AutoTokenizer
import torch
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"  # device for the input tensors
tokenizer = AutoTokenizer.from_pretrained('allenai/OLMoE-1B-7B-0125-Instruct')
model = EaModel.from_pretrained(
base_model_path='allenai/OLMoE-1B-7B-0125-Instruct',
ea_model_path='wantsleep/OLMoE_1B_7B_Eagle3',
torch_dtype='bfloat16',
low_cpu_mem_usage=True,
total_token=-1
)
model.eval()  # inference mode
your_message = "Why we study math?"
conv = get_conversation_template("vicuna")
conv.append_message(conv.roles[0], your_message)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
input_ids = model.tokenizer([prompt]).input_ids
input_ids = torch.as_tensor(input_ids).to(DEVICE)
output_ids = model.eagenerate(input_ids, temperature=0.5, max_new_tokens=512, top_k=8)
output = model.tokenizer.decode(output_ids[0])
print(output)
```
---
## ⚠️ Notes
- **eagle_data.json** contains only OLMoE-generated answers for public ShareGPT questions.
- The EAGLE Draft layer should be designed as close as possible to the main model’s decoder
for optimal inference efficiency.
- `modeling_olmoe_kv.py` **must** be included in your EAGLE inference code for correct operation.
---
## 📚 References
- [EAGLE: Fast Decoding for Large Language Models](https://github.com/SafeAILab/EAGLE)
- [allenai/OLMoE-1B-7B-0125-Instruct](https://huggingface.co/allenai/OLMoE-1B-7B-0125-Instruct)
- [ShareGPT Dataset](https://huggingface.co/datasets/sharegpt)
---
For questions or feedback, please open an issue!
|
Thireus/GLM-4.5-THIREUS-IQ5_K_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T07:50:45Z | 6 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T17:40:22Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
erje/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tenacious_vicious_prawn
|
erje
| 2025-08-06T07:50:34Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tenacious_vicious_prawn",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-17T07:52:44Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tenacious_vicious_prawn
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arianaazarbal/underspecified_hacker_8_5_neutral_456
|
arianaazarbal
| 2025-08-06T07:41:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T07:41:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlleSpezza/T5-xxl-K-r64-merged
|
AlleSpezza
| 2025-08-06T07:38:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/flan-t5-xxl",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google/flan-t5-xxl",
"region:us"
] | null | 2025-08-06T07:36:46Z |
---
base_model: google/flan-t5-xxl
library_name: peft
tags:
- base_model:adapter:google/flan-t5-xxl
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
cazzz307/Pothole-Finetuned-YoloV8
|
cazzz307
| 2025-08-06T07:37:56Z | 18 | 0 |
ultralytics
|
[
"ultralytics",
"object-detection",
"yolov8",
"computer-vision",
"pothole-detection",
"road-infrastructure",
"cv",
"en",
"dataset:custom-pothole-dataset",
"license:apache-2.0",
"model-index",
"region:us"
] |
object-detection
| 2025-08-06T07:30:35Z |
---
language:
- en
library_name: ultralytics
tags:
- object-detection
- yolov8
- computer-vision
- pothole-detection
- road-infrastructure
- cv
datasets:
- custom-pothole-dataset
pipeline_tag: object-detection
license: apache-2.0
model-index:
- name: Pothole-Finetuned-YoloV8
results:
- task:
type: object-detection
metrics:
- type: accuracy
value: 95.0
name: Overall Accuracy
---
# Pothole-Finetuned-YoloV8
This is a fine-tuned YOLOv8 model specifically trained for pothole detection in road images. The model has been trained on thousands of pothole images and achieves over 95% accuracy in detecting potholes across various road conditions and environments.
## Model Description
- **Model Type**: Object Detection (YOLOv8)
- **Task**: Pothole Detection in Road Images
- **Training Data**: Thousands of annotated pothole images
- **Accuracy**: >95%
- **Use Cases**: Road infrastructure monitoring, automated road quality assessment, municipal road maintenance
## Performance
- **Overall Accuracy**: >95%
- **Robust Performance**: Works across different lighting conditions, road types, and pothole sizes
- **Real-time Detection**: Optimized for fast inference suitable for real-time applications
## Training Details
- **Base Model**: YOLOv8
- **Fine-tuning Dataset**: Custom pothole dataset with thousands of annotated images
- **Training Environment**: High-performance GPU infrastructure
- **Optimization**: Model optimized for both accuracy and inference speed
## Use Cases
- **Road Maintenance**: Automated detection of potholes for municipal road maintenance
- **Infrastructure Monitoring**: Large-scale road quality assessment
- **Fleet Management**: Real-time pothole detection for vehicle navigation
- **Research**: Academic and industrial research in road infrastructure analysis
## 🚀 Usage
### Installation
```bash
pip install ultralytics opencv-python pillow
```
### Basic Inference
```python
from ultralytics import YOLO
import cv2
from PIL import Image
import numpy as np
# Load the model
model = YOLO("cazzz307/Pothole-Finetuned-YoloV8")
# Single image inference
def detect_potholes_image(image_path, output_path=None):
"""
Detect potholes in a single image
Args:
image_path (str): Path to input image
output_path (str): Path to save annotated image (optional)
Returns:
results: Detection results
"""
results = model(image_path)
# Print detection results
for result in results:
boxes = result.boxes
if boxes is not None:
print(f"Found {len(boxes)} potholes")
for box in boxes:
confidence = box.conf[0].item()
print(f"Pothole detected with confidence: {confidence:.2f}")
# Save annotated image if output path provided
if output_path:
annotated_frame = results[0].plot()
cv2.imwrite(output_path, annotated_frame)
print(f"Annotated image saved to: {output_path}")
return results
# Video inference
def detect_potholes_video(video_path, output_path=None):
"""
Detect potholes in a video
Args:
video_path (str): Path to input video
output_path (str): Path to save annotated video (optional)
"""
cap = cv2.VideoCapture(video_path)
# Get video properties for output
if output_path:
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
frame_count = 0
total_detections = 0
while cap.isOpened():
success, frame = cap.read()
if not success:
break
# Run inference
results = model(frame)
# Count detections
for result in results:
if result.boxes is not None:
total_detections += len(result.boxes)
# Annotate frame
annotated_frame = results[0].plot()
# Save frame if output path provided
if output_path:
out.write(annotated_frame)
# Display frame (optional)
cv2.imshow("Pothole Detection", annotated_frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
frame_count += 1
cap.release()
if output_path:
out.release()
cv2.destroyAllWindows()
print(f"Processed {frame_count} frames")
print(f"Total potholes detected: {total_detections}")
# Batch processing for multiple images
def detect_potholes_batch(image_folder, output_folder=None):
"""
Process multiple images in a folder
Args:
image_folder (str): Path to folder containing images
output_folder (str): Path to save annotated images (optional)
"""
import os
import glob
image_extensions = ['*.jpg', '*.jpeg', '*.png', '*.bmp', '*.tiff']
image_files = []
for ext in image_extensions:
image_files.extend(glob.glob(os.path.join(image_folder, ext)))
image_files.extend(glob.glob(os.path.join(image_folder, ext.upper())))
total_detections = 0
processed_images = 0
for image_path in image_files:
try:
results = model(image_path)
# Count detections
for result in results:
if result.boxes is not None:
total_detections += len(result.boxes)
print(f"{os.path.basename(image_path)}: {len(result.boxes)} potholes detected")
# Save annotated image if output folder provided
if output_folder:
os.makedirs(output_folder, exist_ok=True)
annotated_frame = results[0].plot()
output_path = os.path.join(output_folder, f"annotated_{os.path.basename(image_path)}")
cv2.imwrite(output_path, annotated_frame)
processed_images += 1
except Exception as e:
print(f"Error processing {image_path}: {str(e)}")
print(f"\nBatch processing complete:")
print(f"Processed images: {processed_images}")
print(f"Total potholes detected: {total_detections}")
# Example usage
if __name__ == "__main__":
# Single image
results = detect_potholes_image("road_image.jpg", "annotated_road.jpg")
# Video processing
# detect_potholes_video("road_video.mp4", "annotated_road_video.mp4")
# Batch processing
# detect_potholes_batch("road_images/", "annotated_images/")
```
### Advanced Usage with Custom Parameters
```python
from ultralytics import YOLO
# Load model
model = YOLO("cazzz307/Pothole-Finetuned-YoloV8")
# Advanced inference with custom parameters
def advanced_pothole_detection(image_path, confidence_threshold=0.5, iou_threshold=0.45):
"""
Advanced pothole detection with custom parameters
Args:
image_path (str): Path to input image
confidence_threshold (float): Confidence threshold for detections
iou_threshold (float): IoU threshold for NMS
Returns:
dict: Detection results with detailed information
"""
results = model(
image_path,
conf=confidence_threshold,
iou=iou_threshold,
verbose=False
)
detection_info = {
'total_potholes': 0,
'high_confidence_potholes': 0,
'detections': []
}
for result in results:
if result.boxes is not None:
detection_info['total_potholes'] = len(result.boxes)
for box in result.boxes:
confidence = box.conf[0].item()
bbox = box.xyxy[0].tolist() # [x1, y1, x2, y2]
detection_info['detections'].append({
'confidence': confidence,
'bbox': bbox,
'area': (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
})
if confidence > 0.8:
detection_info['high_confidence_potholes'] += 1
return detection_info
# Example usage
image_path = "test_road.jpg"
results = advanced_pothole_detection(image_path, confidence_threshold=0.6)
print(f"Total potholes: {results['total_potholes']}")
print(f"High confidence potholes: {results['high_confidence_potholes']}")
```
## Model Limitations
- **Weather Conditions**: Performance may vary in extreme weather conditions (heavy rain, snow)
- **Image Quality**: Best performance on clear, well-lit images
- **Pothole Size**: Optimized for medium to large potholes; very small potholes may be missed
- **Road Types**: Trained primarily on asphalt roads; performance on other surfaces may vary
## Applications
1. **Municipal Road Maintenance**: Automated pothole detection for city road departments
2. **Fleet Management**: Real-time road quality assessment for vehicle routing
3. **Infrastructure Monitoring**: Large-scale road condition surveys
4. **Research Applications**: Academic studies on road infrastructure degradation
5. **Mobile Applications**: Smartphone-based pothole reporting systems
## License
This model is released under the Apache 2.0 License.
## Acknowledgments
- Built upon the excellent YOLOv8 architecture by Ultralytics
- Trained using custom pothole dataset with thousands of annotated images
- Optimized for real-world road infrastructure applications
|
samanta-scratch/Llama-3.2-3B-Instruct
|
samanta-scratch
| 2025-08-06T07:36:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T07:36:42Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** samanta-scratch
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlleSpezza/T5-xxl-FTS-merged
|
AlleSpezza
| 2025-08-06T07:36:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/flan-t5-xxl",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google/flan-t5-xxl",
"region:us"
] | null | 2025-08-06T07:35:51Z |
---
base_model: google/flan-t5-xxl
library_name: peft
tags:
- base_model:adapter:google/flan-t5-xxl
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
jeff010913/DrHyper_Family
|
jeff010913
| 2025-08-06T07:36:28Z | 0 | 1 | null |
[
"safetensors",
"medical",
"hypertension",
"en",
"zh",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"region:us"
] | null | 2025-08-01T14:15:24Z |
---
license: mit
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- medical
- hypertension
---
|
tiny-random/gpt-oss
|
tiny-random
| 2025-08-06T07:33:23Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T05:15:26Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- openai/gpt-oss-120b
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
Note: This model is in BF16; quantized MXFP4 FFN is not used.
### Example usage:
- vLLM
```bash
vllm serve tiny-random/gpt-oss
```
- Transformers
```python
import torch
from transformers import pipeline
model_id = "tiny-random/gpt-oss"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="cuda"
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=16,
)
print(outputs[0]["generated_text"][-1])
```
### Codes to create this repo:
```python
import json
import torch
from huggingface_hub import hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
GenerationConfig,
GptOssForCausalLM,
pipeline,
set_seed,
)
source_model_id = "openai/gpt-oss-120b"
save_folder = "/tmp/tiny-random/gpt-oss"
processor = AutoProcessor.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r') as f:
config_json = json.load(f)
config_json.update({
"head_dim": 32,
"hidden_size": 32, # required by Mxfp4GptOssExperts codes
"intermediate_size": 64,
"layer_types": ["sliding_attention", "full_attention"],
"num_attention_heads": 2,
"num_hidden_layers": 2,
"num_key_value_heads": 1,
"num_local_experts": 32,
"tie_word_embeddings": True,
})
quantization_config = config_json['quantization_config']
del config_json['quantization_config']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(save_folder)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
# mxfp4
from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer
# model = AutoModelForCausalLM.from_pretrained(save_folder, trust_remote_code=True, torch_dtype=torch.bfloat16, quantization_config=quantization_config)
# model.save_pretrained(save_folder, safe_serialization=True)
```
|
tiny-random/gpt-oss-mxfp4
|
tiny-random
| 2025-08-06T07:32:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-06T07:32:39Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- openai/gpt-oss-120b
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
Note: This model uses a quantized MXFP4 FFN, which requires Triton and the Triton kernels package: `pip install -U triton git+https://github.com/triton-lang/triton.git@main#subdirectory=python/triton_kernels`
### Example usage:
- vLLM
```bash
vllm serve tiny-random/gpt-oss-mxfp4
```
- Transformers
```python
import torch
from transformers import pipeline
model_id = "tiny-random/gpt-oss-mxfp4"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype='auto',
device_map="cuda",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=16,
)
print(outputs[0]["generated_text"][-1])
```
### Codes to create this repo:
```python
import json
import safetensors
import torch
from huggingface_hub import hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
GenerationConfig,
GptOssForCausalLM,
pipeline,
set_seed,
)
source_model_id = "openai/gpt-oss-120b"
save_folder = "/tmp/tiny-random/gpt-oss-mxfp4"
processor = AutoProcessor.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r') as f:
config_json = json.load(f)
config_json.update({
"head_dim": 32,
"hidden_size": 32, # required by Mxfp4GptOssExperts codes
"intermediate_size": 64,
"layer_types": ["sliding_attention", "full_attention"],
"num_attention_heads": 2,
"num_hidden_layers": 2,
"num_key_value_heads": 1,
"num_local_experts": 32,
"tie_word_embeddings": True,
})
quantization_config = config_json['quantization_config']
del config_json['quantization_config']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(save_folder)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, torch_dtype=torch.bfloat16)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
# mxfp4
state_dict = model.cpu().state_dict()
del state_dict['lm_head.weight']
for i in range(len(model.model.layers)):
del state_dict[f'model.layers.{i}.mlp.experts.down_proj']
del state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj']
state_dict[f'model.layers.{i}.mlp.experts.down_proj_blocks'] = torch.randint(0, 255, size=(
config.num_local_experts, config.hidden_size, config.intermediate_size // 32, 16), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.down_proj_scales'] = torch.randint(0, 4, size=(
config.num_local_experts, config.hidden_size, config.intermediate_size // 32), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj_blocks'] = torch.randint(0, 255, size=(
config.num_local_experts, 2 * config.intermediate_size, config.hidden_size // 32, 16), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj_scales'] = torch.randint(0, 4, size=(
config.num_local_experts, 2 * config.intermediate_size, config.hidden_size // 32), dtype=torch.uint8
)
safetensors.torch.save_file(state_dict, f"{save_folder}/model.safetensors")
# from unittest.mock import Mock
# from transformers.quantizers.auto import AutoHfQuantizer
# from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer
# _get_device_capability = torch.cuda.get_device_capability
# torch.cuda.get_device_capability = Mock(return_value=(9, 0))
# set_seed(42)
# bf16_state_dict = model.cpu().state_dict()
# model = AutoModelForCausalLM.from_pretrained(save_folder, torch_dtype=torch.bfloat16, quantization_config=quantization_config)
# for i in range(len(model.model.layers)):
# model.model.layers[i].mlp.experts.down_proj_bottom_pad = 0
# model.model.layers[i].mlp.experts.down_proj_right_pad = 0
# hf_quantizer: Mxfp4HfQuantizer = AutoHfQuantizer.from_config(quantization_config)
# hf_quantizer.pre_quantized = False
# ffn_keys = ['model.layers.0.mlp.experts.down_proj', 'model.layers.0.mlp.experts.gate_up_proj',
# 'model.layers.1.mlp.experts.down_proj', 'model.layers.1.mlp.experts.gate_up_proj']
# for key in ffn_keys:
# hf_quantizer.create_quantized_param(model, bf16_state_dict[key], key, "cuda", bf16_state_dict)
# print('down_proj', model.model.layers[0].mlp.experts.down_proj)
# print('down_proj_blocks', model.model.layers[0].mlp.experts.down_proj_blocks)
# state_dict = model.state_dict()
# del state_dict['lm_head.weight']
# for key in ffn_keys:
# del state_dict[key]
# for k, v in state_dict.items():
# if str(v.device) == 'meta':
# print(k, v.device, v.shape)
# safetensors.torch.save_file(state_dict, f"{save_folder}/model.safetensors")
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config = json.load(f)
config['quantization_config'] = quantization_config
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config, f, indent=2)
# torch.cuda.get_device_capability = _get_device_capability
```
|
RyanVuHuynh/llama3.1-8b-MMI-individual-preprocessed-v1
|
RyanVuHuynh
| 2025-08-06T07:31:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T07:31:49Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RyanVuHuynh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tencent/Hunyuan-4B-Pretrain
|
tencent
| 2025-08-06T07:31:01Z | 43 | 7 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-28T06:32:35Z |
---
library_name: transformers
---
<p align="left">
 <a href="README_CN.md">中文</a>&nbsp;&nbsp;|&nbsp;&nbsp;English
</p>
<br><br>
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-4B-Pretrain"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-4B"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
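To make the GQA point above concrete, here is a minimal, hedged sketch (toy shapes, not the actual Hunyuan implementation): several query heads share one key/value head, so the KV cache shrinks by the group factor.
```python
# Minimal GQA sketch: 8 query heads share 2 KV heads (group factor 4). Toy shapes only.
import torch
import torch.nn.functional as F

batch, seq_len = 1, 8
num_q_heads, num_kv_heads, head_dim = 8, 2, 16

q = torch.randn(batch, num_q_heads, seq_len, head_dim)
k = torch.randn(batch, num_kv_heads, seq_len, head_dim)   # KV cache is 4x smaller
v = torch.randn(batch, num_kv_heads, seq_len, head_dim)

group = num_q_heads // num_kv_heads
k = k.repeat_interleave(group, dim=1)   # broadcast each KV head to its query group
v = v.repeat_interleave(group, dim=1)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 8, 16])
```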
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** ,**Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install the transformers branch below; Hunyuan support will be merged into the main branch later.
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
**Single-node training**
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
**Multi-node training**
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantization models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization: FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (without training) to pre-determine the quantization scales; the model weights and activation values are then converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize with AngleSlim, or directly download and use our pre-quantized open-source models [LINK](https://huggingface.co/).
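As a rough illustration of the FP8-static idea (a toy sketch, not the AngelSlim implementation), the snippet below pre-determines a per-tensor scale from a few calibration batches and then reuses that fixed scale at quantization time; 448 is the largest finite value of the `float8_e4m3fn` format.
```python
# Toy FP8-static sketch: calibrate a fixed scale once, then reuse it (illustrative only).
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def calibrate_scale(calibration_batches):
    """Pre-determine a per-tensor scale from a small amount of calibration data."""
    amax = max(batch.abs().max() for batch in calibration_batches)
    return (amax / FP8_E4M3_MAX).clamp(min=1e-12)

def quant_dequant_fp8(x, scale):
    """Static quantization with the pre-computed scale, then dequantize for comparison."""
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8.to(torch.float32) * scale

calib = [torch.randn(16, 32) for _ in range(4)]
scale = calibrate_scale(calib)
x = torch.randn(16, 32)
print((x - quant_dequant_fp8(x, scale)).abs().mean())  # small quantization error
```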
### Int4 Quantization
We use the GPTQ and AWQ algorithm to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer: it uses a small amount of calibration data to minimize the reconstruction error of the quantized weights, adjusting the weights via an optimization procedure based on an approximate inverse Hessian. The process eliminates the need to retrain the model and requires only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (without training) to compute statistics of the activation magnitudes. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
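The toy sketch below (again illustrative, not the AngelSlim implementation) shows the core AWQ mechanics: activation statistics give a per-channel scale s, the weights are multiplied by s and the activations divided by s so the layer output is mathematically unchanged, and the scaled weights are then rounded to 4 bits.
```python
# Hedged AWQ-style sketch: scale salient weight channels before 4-bit rounding.
# Illustrative only; real AWQ searches the scaling exponent per group.
import torch

def fake_int4_round(w):
    """Toy symmetric 4-bit round-to-nearest, one scale per output row."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7
    return torch.round(w / scale).clamp(-8, 7) * scale

torch.manual_seed(0)
w = torch.randn(64, 128)                                   # [out_features, in_features]
x = torch.randn(256, 128) * torch.linspace(0.1, 5.0, 128)  # calibration activations

s = x.abs().amax(dim=0).clamp(min=1e-5) ** 0.5  # per-channel scale (fixed exponent here)

# Mathematical invariance: scaling weights up and activations down cancels exactly.
assert torch.allclose((x / s) @ (w * s).T, x @ w.T, atol=1e-4)

# Quantize the scaled weights; at inference the activation division is folded away.
w_q = fake_int4_round(w * s)
y_ref = x @ w.T
y_awq = (x / s) @ w_q.T
print("mean abs output error:", (y_awq - y_ref).abs().mean().item())
```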
You can quantize with [AngleSlim](https://github.com/tencent/AngelSlim), or directly download and use our pre-quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker image tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (available tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vLLM
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- Model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- Model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script starts successfully, send a test request (a Python equivalent follows the curl example):
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
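The same request can be sent from Python. This is a sketch that assumes the `openai` client package is installed and the server above is listening on port 8000; the vLLM-specific sampling fields are passed through `extra_body`:
```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")  # vLLM does not check the key
response = client.chat.completions.create(
    model="hunyuan",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"},
    ],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.6,
    extra_body={"top_k": 20, "repetition_penalty": 1.05, "stop_token_ids": [127960]},
)
print(response.choices[0].message.content)
```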
#### Quantized model deployment
This section describes how to deploy a post-quantization model using vLLM.
By default, the server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8 (weight-only) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable, as sketched below.
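For example, point `MODEL_PATH` at the model to serve (a sketch; the same path as in the BF16 example above):
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
Next, start the Int8 service: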
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4 (weight-only, GPTQ) version of the HunYuan-7B model only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next, start the Int4 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 (FP8) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable to the FP8 model path.
Next, start the FP8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email ([email protected]).
|
tencent/Hunyuan-1.8B-Pretrain
|
tencent
| 2025-08-06T07:30:36Z | 56 | 8 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-28T06:27:24Z |
---
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-1.8B-Pretrain"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-1.8B-Pretrain"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-1.8B-Pretrain"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-1.8B-Pretrain/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, and **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers from the commit below (support will be merged into the main branch later).
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to slow-thinking reasoning, and there are two ways to disable CoT reasoning (a short sketch of both switches follows the example below).
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and run the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
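As a minimal sketch of the two switches described above, reusing the `tokenizer` and `messages` from the example:
```python
# Option 1: disable the thinking phase via the chat-template argument.
no_think_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt",
    enable_thinking=False,
)

# Option 2: keep the default and prefix the prompt with /no_think instead.
fast_messages = [{"role": "user", "content": "/no_think" + messages[0]["content"]}]
fast_ids = tokenizer.apply_chat_template(
    fast_messages, tokenize=True, add_generation_prompt=True, return_tensors="pt",
)
```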
We recommend using the following set of parameters for inference (a sketch of passing them to `generate` follows the JSON block). Note that our model does not use a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
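One way to apply these settings with the transformers example above (a sketch; the names map directly onto the standard `generate` keyword arguments):
```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
```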
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following section, we introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformers library**: Use the companion branch below (the Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngelSlim` is a toolset dedicated to providing a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization. FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (no training required) to pre-determine the quantization scales; the model weights and activation values are then converted to FP8, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with AngelSlim, or directly download our released quantized models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer through an optimization procedure that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold.
AWQ also uses a small amount of calibration data (no training required) to measure the magnitude of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize the model yourself with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our released quantized models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection reports the benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker image tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (available tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vLLM
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- Model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- Model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script starts successfully, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes how to deploy a post-quantization model using vLLM.
By default, the server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8 (weight-only) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable.
Next, start the Int8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4 (weight-only, GPTQ) version of the HunYuan-7B model only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next, start the Int4 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 (FP8) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable to the FP8 model path.
Next, start the FP8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email ([email protected]).
|
tencent/Hunyuan-1.8B-Instruct
|
tencent
| 2025-08-06T07:30:26Z | 1,704 | 537 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T03:14:30Z |
---
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-1.8B-Instruct"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-1.8B"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-1.8B"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-1.8B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, and **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers from the commit below (support will be merged into the main branch later).
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and run the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not use a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following section, we introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformers library**: Use the companion branch below (the Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngelSlim` is a toolset dedicated to providing a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization. FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (no training required) to pre-determine the quantization scales; the model weights and activation values are then converted to FP8, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with AngelSlim, or directly download our released quantized models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer through an optimization procedure that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold.
AWQ also uses a small amount of calibration data (no training required) to measure the magnitude of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize the model yourself with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our released quantized models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection reports the benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker image tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (available tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vLLM
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- Model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- Model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script starts successfully, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes how to deploy a post-quantization model using vLLM.
By default, the server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8 (weight-only) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable.
Next, start the Int8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4 (weight-only, GPTQ) version of the HunYuan-7B model only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next, start the Int4 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 (FP8) version of the HunYuan-7B model only requires setting the `MODEL_PATH` environment variable to the FP8 model path.
Next, start the FP8 service:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email ([email protected]).
|
Thireus/GLM-4.5-THIREUS-Q3_K-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T07:30:12Z | 85 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-30T06:28:00Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
yujiepan/gpt-oss-tiny-random-mxfp4
|
yujiepan
| 2025-08-06T07:28:38Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"autotrain_compatible",
"endpoints_compatible",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-06T07:28:35Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- openai/gpt-oss-120b
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
Note: This model uses a quantized MXFP4 FFN; install the kernel dependencies with `pip install -U triton git+https://github.com/triton-lang/triton.git@main#subdirectory=python/triton_kernels`
### Example usage:
- vLLM
```bash
vllm serve yujiepan/gpt-oss-tiny-random-mxfp4
```
- Transformers
```python
import torch
from transformers import pipeline
model_id = "yujiepan/gpt-oss-tiny-random-mxfp4"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype='auto',
device_map="cuda",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=16,
)
print(outputs[0]["generated_text"][-1])
```
### Code used to create this repo:
```python
import json
import safetensors
import torch
from huggingface_hub import hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
GenerationConfig,
GptOssForCausalLM,
pipeline,
set_seed,
)
source_model_id = "openai/gpt-oss-120b"
save_folder = "/tmp/yujiepan/gpt-oss-tiny-random-mxfp4"
processor = AutoProcessor.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r') as f:
config_json = json.load(f)
config_json.update({
"head_dim": 32,
"hidden_size": 32, # required by Mxfp4GptOssExperts codes
"intermediate_size": 64,
"layer_types": ["sliding_attention", "full_attention"],
"num_attention_heads": 2,
"num_hidden_layers": 2,
"num_key_value_heads": 1,
"num_local_experts": 32,
"tie_word_embeddings": True,
})
quantization_config = config_json['quantization_config']
del config_json['quantization_config']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(save_folder)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, torch_dtype=torch.bfloat16)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape)
model.save_pretrained(save_folder)
# mxfp4
state_dict = model.cpu().state_dict()
del state_dict['lm_head.weight']
for i in range(len(model.model.layers)):
del state_dict[f'model.layers.{i}.mlp.experts.down_proj']
del state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj']
state_dict[f'model.layers.{i}.mlp.experts.down_proj_blocks'] = torch.randint(0, 255, size=(
config.num_local_experts, config.hidden_size, config.intermediate_size // 32, 16), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.down_proj_scales'] = torch.randint(0, 4, size=(
config.num_local_experts, config.hidden_size, config.intermediate_size // 32), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj_blocks'] = torch.randint(0, 255, size=(
config.num_local_experts, 2 * config.intermediate_size, config.hidden_size // 32, 16), dtype=torch.uint8
)
state_dict[f'model.layers.{i}.mlp.experts.gate_up_proj_scales'] = torch.randint(0, 4, size=(
config.num_local_experts, 2 * config.intermediate_size, config.hidden_size // 32), dtype=torch.uint8
)
safetensors.torch.save_file(state_dict, f"{save_folder}/model.safetensors")
# from unittest.mock import Mock
# from transformers.quantizers.auto import AutoHfQuantizer
# from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer
# _get_device_capability = torch.cuda.get_device_capability
# torch.cuda.get_device_capability = Mock(return_value=(9, 0))
# set_seed(42)
# bf16_state_dict = model.cpu().state_dict()
# model = AutoModelForCausalLM.from_pretrained(save_folder, torch_dtype=torch.bfloat16, quantization_config=quantization_config)
# for i in range(len(model.model.layers)):
# model.model.layers[i].mlp.experts.down_proj_bottom_pad = 0
# model.model.layers[i].mlp.experts.down_proj_right_pad = 0
# hf_quantizer: Mxfp4HfQuantizer = AutoHfQuantizer.from_config(quantization_config)
# hf_quantizer.pre_quantized = False
# ffn_keys = ['model.layers.0.mlp.experts.down_proj', 'model.layers.0.mlp.experts.gate_up_proj',
# 'model.layers.1.mlp.experts.down_proj', 'model.layers.1.mlp.experts.gate_up_proj']
# for key in ffn_keys:
# hf_quantizer.create_quantized_param(model, bf16_state_dict[key], key, "cuda", bf16_state_dict)
# print('down_proj', model.model.layers[0].mlp.experts.down_proj)
# print('down_proj_blocks', model.model.layers[0].mlp.experts.down_proj_blocks)
# state_dict = model.state_dict()
# del state_dict['lm_head.weight']
# for key in ffn_keys:
# del state_dict[key]
# for k, v in state_dict.items():
# if str(v.device) == 'meta':
# print(k, v.device, v.shape)
# safetensors.torch.save_file(state_dict, f"{save_folder}/model.safetensors")
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config = json.load(f)
config['quantization_config'] = quantization_config
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config, f, indent=2)
# torch.cuda.get_device_capability = _get_device_capability
```
|
ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed44
|
ekiprop
| 2025-08-06T07:27:24Z | 54 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-06T07:14:02Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-HEURISTIC-Standard_LoRA-Q_V-seed44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-HEURISTIC-Standard_LoRA-Q_V-seed44
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2176
- Accuracy: 0.9404
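As a usage sketch (not generated by the Trainer and not part of the original training setup), the adapter can be loaded on top of `roberta-base` with 🤗 PEFT; the SST-2 label mapping in the comment is an assumption:
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Load the base classifier and attach this LoRA adapter on top of it
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-HEURISTIC-Standard_LoRA-Q_V-seed44")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("a gripping, well-acted thriller", return_tensors="pt")
label_id = model(**inputs).logits.argmax(dim=-1).item()
print(label_id)  # assumed mapping: 0 = negative, 1 = positive
```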
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.3956 | 0.0950 | 200 | 0.2284 | 0.9163 |
| 0.2908 | 0.1900 | 400 | 0.2034 | 0.9197 |
| 0.2702 | 0.2850 | 600 | 0.2203 | 0.9140 |
| 0.2521 | 0.3800 | 800 | 0.2060 | 0.9323 |
| 0.2466 | 0.4751 | 1000 | 0.2461 | 0.9209 |
| 0.2398 | 0.5701 | 1200 | 0.2144 | 0.9289 |
| 0.237 | 0.6651 | 1400 | 0.1965 | 0.9289 |
| 0.229 | 0.7601 | 1600 | 0.2078 | 0.9289 |
| 0.2261 | 0.8551 | 1800 | 0.2058 | 0.9243 |
| 0.2101 | 0.9501 | 2000 | 0.2160 | 0.9335 |
| 0.2251 | 1.0451 | 2200 | 0.1920 | 0.9323 |
| 0.2 | 1.1401 | 2400 | 0.1959 | 0.9312 |
| 0.2054 | 1.2352 | 2600 | 0.2070 | 0.9335 |
| 0.197 | 1.3302 | 2800 | 0.2225 | 0.9312 |
| 0.1938 | 1.4252 | 3000 | 0.1982 | 0.9300 |
| 0.1894 | 1.5202 | 3200 | 0.2029 | 0.9323 |
| 0.1997 | 1.6152 | 3400 | 0.1861 | 0.9278 |
| 0.1812 | 1.7102 | 3600 | 0.2439 | 0.9323 |
| 0.1913 | 1.8052 | 3800 | 0.2098 | 0.9335 |
| 0.1997 | 1.9002 | 4000 | 0.1888 | 0.9335 |
| 0.1845 | 1.9952 | 4200 | 0.2243 | 0.9346 |
| 0.179 | 2.0903 | 4400 | 0.2003 | 0.9346 |
| 0.1748 | 2.1853 | 4600 | 0.2214 | 0.9289 |
| 0.1729 | 2.2803 | 4800 | 0.2401 | 0.9335 |
| 0.1743 | 2.3753 | 5000 | 0.2186 | 0.9381 |
| 0.1704 | 2.4703 | 5200 | 0.2206 | 0.9381 |
| 0.1753 | 2.5653 | 5400 | 0.2247 | 0.9323 |
| 0.1835 | 2.6603 | 5600 | 0.2068 | 0.9369 |
| 0.1702 | 2.7553 | 5800 | 0.2000 | 0.9346 |
| 0.1601 | 2.8504 | 6000 | 0.2104 | 0.9346 |
| 0.1731 | 2.9454 | 6200 | 0.1985 | 0.9358 |
| 0.1577 | 3.0404 | 6400 | 0.2166 | 0.9358 |
| 0.1489 | 3.1354 | 6600 | 0.2239 | 0.9346 |
| 0.163 | 3.2304 | 6800 | 0.2124 | 0.9381 |
| 0.1637 | 3.3254 | 7000 | 0.2131 | 0.9381 |
| 0.1549 | 3.4204 | 7200 | 0.2127 | 0.9358 |
| 0.1513 | 3.5154 | 7400 | 0.2334 | 0.9335 |
| 0.1592 | 3.6105 | 7600 | 0.2161 | 0.9381 |
| 0.17 | 3.7055 | 7800 | 0.2086 | 0.9369 |
| 0.1545 | 3.8005 | 8000 | 0.2103 | 0.9381 |
| 0.1609 | 3.8955 | 8200 | 0.2172 | 0.9381 |
| 0.1552 | 3.9905 | 8400 | 0.2273 | 0.9346 |
| 0.1471 | 4.0855 | 8600 | 0.2271 | 0.9381 |
| 0.1495 | 4.1805 | 8800 | 0.2176 | 0.9404 |
| 0.1435 | 4.2755 | 9000 | 0.2151 | 0.9358 |
| 0.1439 | 4.3705 | 9200 | 0.2169 | 0.9392 |
| 0.143 | 4.4656 | 9400 | 0.2186 | 0.9369 |
| 0.1364 | 4.5606 | 9600 | 0.2279 | 0.9404 |
| 0.1517 | 4.6556 | 9800 | 0.2167 | 0.9392 |
| 0.1395 | 4.7506 | 10000 | 0.2222 | 0.9369 |
| 0.1558 | 4.8456 | 10200 | 0.2180 | 0.9392 |
| 0.1521 | 4.9406 | 10400 | 0.2193 | 0.9381 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
crystalline7/1407763
|
crystalline7
| 2025-08-06T07:26:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T07:26:09Z |
https://civitaiarchive.com/models/853670?modelVersionId=1407763
|
Jyotiprakash4357/sft-tiny-chatbot
|
Jyotiprakash4357
| 2025-08-06T07:20:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T07:18:48Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: sft-tiny-chatbot
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jyotiprakash4357/sft-tiny-chatbot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
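A minimal sketch of a TRL SFT run that could produce a model like this; the dataset and hyperparameters below are placeholders, since the actual training data is not documented here:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
# Placeholder dataset for illustration only; the real training data is unknown.
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-tiny-chatbot"),
)
trainer.train()
```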
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Thireus/GLM-4.5-THIREUS-IQ5_KS_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T07:19:37Z | 6 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T07:16:20Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
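For reference, merging downloaded shards back into a single file could look like this (file names are illustrative; adapt them to the shards you actually downloaded):
```
llama-gguf-split --merge GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf GLM-4.5-merged.gguf
```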
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
huynhyen/Reinforce-CartPole-v1
|
huynhyen
| 2025-08-06T07:18:55Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-06T07:18:45Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
crystalline7/1276846
|
crystalline7
| 2025-08-06T07:12:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T07:12:41Z |
[View on Civ Archive](https://civitaiarchive.com/models/382886?modelVersionId=1373921)
|
crystalline7/1955676
|
crystalline7
| 2025-08-06T07:11:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T07:11:53Z |
[View on Civ Archive](https://civitaiarchive.com/models/1819038?modelVersionId=2058525)
|
jaimefrevoltio/act_t3_picktobox_v2_onearm_s101
|
jaimefrevoltio
| 2025-08-06T07:09:36Z | 12 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:jaimefrevoltio/picktobox_v2_onearm_s101_wback",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T07:09:29Z |
---
datasets: jaimefrevoltio/picktobox_v2_onearm_s101_wback
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Josephzzz/diffusion-peg-single_arm
|
Josephzzz
| 2025-08-06T07:05:54Z | 11 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:Josephzzz/peg-in-hole-single-arm",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T06:47:33Z |
---
datasets: Josephzzz/peg-in-hole-single-arm
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- lerobot
- robotics
- diffusion
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF
|
mradermacher
| 2025-08-06T07:00:04Z | 144 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/Irixxed-Magcap-12B-0.1a",
"base_model:quantized:Nitral-AI/Irixxed-Magcap-12B-0.1a",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-06T04:48:17Z |
---
base_model: Nitral-AI/Irixxed-Magcap-12B-0.1a
language:
- en
library_name: transformers
license: other
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Nitral-AI/Irixxed-Magcap-12B-0.1a
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Irixxed-Magcap-12B-0.1a-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
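As a quick, non-authoritative example, a recent `llama.cpp` build can run one of the quantized files listed below directly (here the i1-Q4_K_M file):
```
./llama-cli -m Irixxed-Magcap-12B-0.1a.i1-Q4_K_M.gguf -p "Hello, how are you?" -n 128
```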
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Irixxed-Magcap-12B-0.1a-i1-GGUF/resolve/main/Irixxed-Magcap-12B-0.1a.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tencent/Hunyuan-7B-Pretrain
|
tencent
| 2025-08-06T07:00:04Z | 45 | 9 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-28T06:35:56Z |
---
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-7B-Pretrain"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** ,**Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install the transformers branch below; Hunyuan support will be merged into the main branch later.
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
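For example, these values can be passed directly to `generate` when continuing the snippet above (shown only to illustrate wiring in the recommended parameters):
```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
```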
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngelSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization: model weights and activation values are converted to an 8-bit floating-point format, with the quantization scales pre-determined from a small amount of calibration data (no training required), which improves inference efficiency and lowers the deployment threshold. You can quantize the model yourself with AngelSlim, or directly download our pre-quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights, and adjusting the weights through an optimization process based on an approximation of the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (without the need for training) to statistically estimate the amplitude of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize the model yourself with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our pre-quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection presents the benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started:
https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download Model file:
- Hugging Face: downloaded automatically by vLLM.
- ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- model download by huggingface:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- model downloaded by modelscope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script is running successfully, run the request script:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
The default server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the model path environment variable as shown above.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only (GPTQ) version of the HunYuan-7B model only requires setting the environment variable:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 (FP8) version of the HunYuan-7B model only requires setting the model path environment variable.
Next we start the FP8 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email ([email protected]).
|
Thireus/GLM-4.5-THIREUS-IQ2_KS-SPECIAL_SPLIT
|
Thireus
| 2025-08-06T06:58:48Z | 11 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T15:12:24Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
crystalline7/1444977
|
crystalline7
| 2025-08-06T06:58:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:58:34Z |
[View on Civ Archive](https://civitaiarchive.com/models/1367698?modelVersionId=1545162)
|
crystalline7/1366322
|
crystalline7
| 2025-08-06T06:57:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:57:28Z |
[View on Civ Archive](https://civitaiarchive.com/models/1286235?modelVersionId=1465150)
|
crystalline7/1953765
|
crystalline7
| 2025-08-06T06:54:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:54:50Z |
[View on Civ Archive](https://civitaiarchive.com/models/1817687?modelVersionId=2056951)
|
sherryxychen/my_policy
|
sherryxychen
| 2025-08-06T06:52:21Z | 7 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:sherryxychen/so101_test2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T06:51:01Z |
---
datasets: sherryxychen/so101_test2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
crystalline7/1257128
|
crystalline7
| 2025-08-06T06:51:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:51:38Z |
[View on Civ Archive](https://civitaiarchive.com/models/1202307?modelVersionId=1353837)
|
crystalline7/1479601
|
crystalline7
| 2025-08-06T06:51:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:51:24Z |
[View on Civ Archive](https://civitaiarchive.com/models/1397690?modelVersionId=1579858)
|
crystalline7/1403058
|
crystalline7
| 2025-08-06T06:50:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:50:47Z |
[View on Civ Archive](https://civitaiarchive.com/models/1331321?modelVersionId=1503090)
|
crystalline7/1388882
|
crystalline7
| 2025-08-06T06:50:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:50:23Z |
[View on Civ Archive](https://civitaiarchive.com/models/1318503?modelVersionId=1488552)
|
inference-sh/Wan2.2-I2V-A14B-LightX2V-Diffusers
|
inference-sh
| 2025-08-06T06:50:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T06:50:23Z |
---
license: apache-2.0
---
|
phospho-app/AlisaLeeZhiQi-ACT_BBOX-Pick_Up_Round_Cicle_to_Round_Box_V4-eplb7
|
phospho-app
| 2025-08-06T06:49:48Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-08-06T06:49:01Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Number of episodes in episodes.jsonl (98) does not match the number of parquet files (97). This is unexpected after computing bounding boxes.
```
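If you want to reproduce this check locally, here is a minimal diagnostic sketch, assuming the usual LeRobot-style layout of `meta/episodes.jsonl` plus one parquet file per episode under `data/`; the local path below is hypothetical:
```python
from pathlib import Path

# Hypothetical local copy of the dataset repository.
root = Path("Pick_Up_Round_Cicle_to_Round_Box_V4")

# Count episode entries recorded in the metadata file.
with open(root / "meta" / "episodes.jsonl") as f:
    n_episodes = sum(1 for line in f if line.strip())

# Count the per-episode parquet files actually present on disk.
n_parquet = len(list(root.glob("data/**/*.parquet")))

print(f"episodes.jsonl entries: {n_episodes}, parquet files: {n_parquet}")
if n_episodes != n_parquet:
    print("Mismatch: at least one episode entry has no matching parquet file (or vice versa).")
```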
## Training parameters:
- **Dataset**: [Matt1208/Pick_Up_Round_Cicle_to_Round_Box_V4](https://huggingface.co/datasets/Matt1208/Pick_Up_Round_Cicle_to_Round_Box_V4)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
crystalline7/1524828
|
crystalline7
| 2025-08-06T06:49:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:49:43Z |
[View on Civ Archive](https://civitaiarchive.com/models/1436873?modelVersionId=1624191)
|
crystalline7/1484412
|
crystalline7
| 2025-08-06T06:49:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:49:07Z |
[View on Civ Archive](https://civitaiarchive.com/models/1401925?modelVersionId=1584682)
|
crystalline7/1328645
|
crystalline7
| 2025-08-06T06:48:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:48:47Z |
[View on Civ Archive](https://civitaiarchive.com/models/1265559?modelVersionId=1427281)
|
hanchaow/QTuneVL1_5-2B
|
hanchaow
| 2025-08-06T06:48:47Z | 69 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"multilingual",
"arxiv:2310.05126",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-23T12:06:47Z |
---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternViT-300M-v2.5
- internlm/Qwen2.5-1.5B
base_model_relation: merge
language:
- multilingual
---
# QTuneVL1.5-2B, developed by the [Reconova AI Lab](https://www.reconova.com/) and the [BDAA-Lab](https://dm.ustc.edu.cn/index.html)
# Introduction
We’re excited to introduce QTuneVL1.5-2B, the latest in [Reconova AI Lab's](https://www.reconova.com/) series of multimodal large language models. Building on [QTuneVL1-2B](https://huggingface.co/hanchaow/QTuneVL1-2B), it incorporates key features from both [InternVL](https://huggingface.co/OpenGVLab/InternVL2_5-2B) and [Mini-Monkey](https://huggingface.co/mx262/MiniMonkey) to deliver even greater performance.
Like QTuneVL1-2B, QTuneVL1.5-2B is a lightweight MLLM that incorporates cropping and padding strategies from [Mini-Monkey](https://huggingface.co/mx262/MiniMonkey)/[Ureader](https://arxiv.org/abs/2310.05126)/[InternVL](https://github.com/OpenGVLab/InternVL), and has been fine-tuned on [InternVL3-2B](https://huggingface.co/OpenGVLab/InternVL3-2B).
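For reference, a minimal loading sketch is shown below; it assumes the standard InternVL custom-code interface that this fine-tune inherits, and the chat call is left commented because image preprocessing is not covered in this card:
```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "hanchaow/QTuneVL1_5-2B"
# trust_remote_code is required because the repository ships custom InternVL-style code.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# The assumed InternVL-style interface takes preprocessed image tensors plus a question;
# see the InternVL3-2B card for the image preprocessing helpers.
# response = model.chat(tokenizer, pixel_values, "Describe this image.", dict(max_new_tokens=256))
```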
# Evaluation
By evaluating our model on eight benchmarks from the [OpenCompass](https://rank.opencompass.org.cn/leaderboard-multimodal) leaderboard using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), we found that it outperforms its predecessor (QTuneVL1-2B) in average score, particularly on the MMStar, MMMU_DEV_VAL, and OCRBench benchmarks. The eight benchmarks and detailed results are as follows:
**Eight benchmarks:** `'MMBench_DEV_EN_V11', 'MMStar', 'MMMU_DEV_VAL', 'MathVista_MINI', 'HallusionBench', 'AI2D_TEST', 'OCRBench', 'MMVet'`.
| Index | Model | AVG | MMBench_DEV_EN_V11 | MMStar | MMMU_DEV_VAL | MathVista_MINI | HallusionBench | AI2D_TEST | OCRBench | MMVet |
|:------:|------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| 1 | Minimonkey | 54.3 | 71.4 | 50.3 | 35.6 | 46.3 | 38.6 | 74.8 | 802 | 37.2 |
| 2 | InternVL2-2B | 54.2 | 71.4 | 50.3 | 34.6 | 47.2 | 38.2 | 74.2 | 783 | 39.8 |
| 3 | InternVL2_5-2B | 59.4 | 74.6 | 53.7 | 40.1 | 49.7 | 42.2 | 74.9 | 802 | 59.5 |
| 4 | InternVL3-2B | 63.5 | 79.6 | 61.1 | 48.6 | 51.1 | 42 | 78.4 | 835 | 64.08 |
| 5 | QTuneVL1-2B | 59.7 | 74.9 | 53.9 | 41.5 | 48.8 | 43.0| 75.2 | 806 | 59.6 |
| 6 | QTuneVL1.5-2B | **64.2(+4.5)** | **79.6(+4.7)** | **61.4(+7.5)** | **51.1(+9.6)** | **51.8(+3)** | **43.0**| **78.8(+3.6)** | **858(+52)** | **62.1(+2.5)** |
It is important to note that when using **VLMEvalKit** for evaluation, the GPT-related evaluation models being called differ slightly from the official ones. In the code (`vlmeval/dataset/utils/judge_util.py`), it uses:
- `'gpt-4o-mini': 'gpt-4o-mini'` instead of `'gpt-4o-mini': 'gpt-4o-mini-2024-07-18'`
- `'gpt-4-turbo': 'gpt-4-turbo'` instead of `'gpt-4-turbo': 'gpt-4-1106-preview'`
This configuration will result in evaluation results that slightly differ from the official ones.
# Copyright
We welcome suggestions to help us improve QTuneVL. For any query, please contact HanChao Wang: [email protected]. If you find something interesting, feel free to share it with us by email or open an issue.
|
crystalline7/1404803
|
crystalline7
| 2025-08-06T06:48:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:48:13Z |
[View on Civ Archive](https://civitaiarchive.com/models/1332736?modelVersionId=1504804)
|
AXERA-TECH/Qwen2.5-VL-7B-Instruct
|
AXERA-TECH
| 2025-08-06T06:46:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"Qwen2.5-VL",
"Qwen2.5-VL-7B-Instruct",
"Int8",
"VLM",
"image-text-to-text",
"en",
"zh",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-07-31T11:46:02Z |
---
license: mit
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- Qwen2.5-VL
- Qwen2.5-VL-7B-Instruct
- Int8
- VLM
---
# Qwen2.5-VL-7B-Instruct
This version of Qwen2.5-VL-7B-Instruct has been converted to run on the Axera NPU using **w8a16** quantization.
This model has been optimized with the following LoRA:
Compatible with Pulsar2 version: 3.4
## Convert tools links:
For those who are interested in model conversion, you can try to export axmodel through the original repo :
https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU HOST LLM Runtime](https://github.com/AXERA-TECH/Qwen2.5-VL-3B-Instruct.axera/tree/main)
[AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/Qwen2.5-VL-3B-Instruct.axera/tree/axcl)
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
**Image Process**
| Chip | Input size | Image num | Image encoder | TTFT (320 tokens) | w8a16 decode | DDR | Flash |
|--|--|--|--|--|--|--|--|
| AX650 | 448*448 | 1 | 760 ms | 3500 ms | 2.0 tokens/sec | 10.0 GiB | 9.8 GiB |
**Video Process**
| Chip | Input size | Image num | Image encoder | TTFT (512 tokens) | w8a16 decode | DDR | Flash |
|--|--|--|--|--|--|--|--|
| AX650 | 308*308 | 8 | 1500 ms | 5080 ms | 2.0 tokens/sec | 10.0 GiB | 9.8 GiB |
The DDR capacity refers to the CMM memory that needs to be consumed. Ensure that the CMM memory allocation on the development board is greater than this value.
## How to use
Download all files from this repository to the device.
**If you are using the AX650 board**
```
(base) axera@dell:~/lhj/Qwen2.5-VL-7B-Instruct$ tree -L 2
.
├── images
├── main_axcl_x86
├── post_config.json
├── Qwen2.5-VL-7B-Instruct-AX650-chunk_prefill_1280
│ ├── model.embed_tokens.weight.bfloat16.bin
│ ├── Qwen2.5-VL-7B-Instruct_vision.axmodel
│ ├── qwen2_5_vl_p128_l0_together.axmodel
......
│ └── qwen2_5_vl_post.axmodel
├── qwen2_5_vl_7b_tokenizer
├── qwen2_tokenizer_images.py
├── qwen2_tokenizer_video_308.py
├── README.md
├── run_qwen2_5vl_image.sh
├── run_qwen2_5vl_video.sh
└── video
```
### Prepare tokenizer server
#### Install transformers
```
pip install transformers==4.41.1 jinja2
```
### Demo Run
#### Image understanding demo
##### Start the tokenizer server for the image understanding demo
```
python3 qwen2_tokenizer_images.py --port 12345
```
##### Run the image understanding demo
- input text
```
What are these attractions? Please give their names in Chinese and English
```
- input image

```
(base) axera@dell:~/lhj/Qwen2.5-VL-7B-Instruct$ bash run_qwen2_5vl_image.sh
[I][ Init][ 162]: LLM init start
[I][ Init][ 267]: IMAGE_CONTEXT_TOKEN: 151655, IMAGE_START_TOKEN: 151652
[I][ Init][ 328]: image encoder output float32
[I][ Init][ 456]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> What are these attractions? Please give their names in Chinese and English
image >> images/attractions
images/attractions/recoAll_attractions_1.jpg
images/attractions/recoAll_attractions_2.jpg
images/attractions/recoAll_attractions_3.jpg
images/attractions/recoAll_attractions_4.jpg
[I][ Encode][ 552]: image encode time : 3014.224121 ms, size : 4
[I][ Encode][ 594]: input_ids size:1064
[I][ Encode][ 602]: offset 15
[I][ Encode][ 602]: offset 273
[I][ Encode][ 602]: offset 531
[I][ Encode][ 602]: offset 789
[I][ Encode][ 624]: out_embed size:3813376
[I][ Encode][ 626]: position_ids size:7982
[I][ Run][ 645]: input token num : 1064, prefill_split_num : 9
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:128
[I][ Run][ 679]: input_num_token:40
[I][ Run][ 816]: ttft: 15817.47 ms
1. **金字塔 (Pyramids)**
- **英文**: Pyramids
- **位置**: 埃及 (Egypt)
2. **长城 (Great Wall of China)**
- **英文**: Great Wall of China
- **位置**: 中国 (China)
3. **自由女神像 (Statute of Liberty)**
- **英文**: Statue of Liberty
- **位置**: 美国 (United States)
4. **兵马俑 (Terracotta Army)**
- **英文**: Terracotta Army
- **位置**: 中国 (China)
[N][ Run][ 969]: hit eos,avg 2.05 token/s
```
#### Video understanding demo
Please pre-process the video frames into 308x308 images.
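As a hedged example (this script is not part of the repository), one way to sample eight frames and resize them to 308x308 so they match the `video/frame_XXXX.jpg` names used below:
```python
import os
import cv2  # pip install opencv-python

os.makedirs("video", exist_ok=True)
cap = cv2.VideoCapture("input.mp4")  # hypothetical source video

# Sample every 8th frame (0, 8, ..., 56) and resize to the 308x308 input size.
for idx in range(0, 64, 8):
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"video/frame_{idx:04d}.jpg", cv2.resize(frame, (308, 308)))
cap.release()
```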
##### Start the tokenizer server for the video understanding demo
```
python qwen2_tokenizer_video_308.py --port 12345
```
##### Run the video understanding demo
```
(base) axera@dell:~/lhj/Qwen2.5-VL-7B-Instruct$ bash run_qwen2_5vl_video.sh
[I][ Init][ 162]: LLM init start
[I][ Init][ 267]: IMAGE_CONTEXT_TOKEN: 151656, IMAGE_START_TOKEN: 151652
[I][ Init][ 328]: image encoder output float32
[I][ Init][ 340]: max_token_len : 2047
[I][ Init][ 343]: kv_cache_size : 512, kv_cache_num: 2047
[I][ Init][ 351]: prefill_token_num : 128
[I][ Init][ 355]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 355]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 355]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 355]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 355]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 355]: grp: 6, prefill_max_token_num : 640
[I][ Init][ 355]: grp: 7, prefill_max_token_num : 768
[I][ Init][ 355]: grp: 8, prefill_max_token_num : 896
[I][ Init][ 355]: grp: 9, prefill_max_token_num : 1024
[I][ Init][ 355]: grp: 10, prefill_max_token_num : 1152
[I][ Init][ 355]: grp: 11, prefill_max_token_num : 1280
[I][ Init][ 359]: prefill_max_token_num : 1280
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 30,
"repetition_penalty": 2,
"temperature": 0.1,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 456]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> 描述这个视频的内容
image >> video
video/frame_0000.jpg
video/frame_0008.jpg
video/frame_0016.jpg
video/frame_0024.jpg
video/frame_0032.jpg
video/frame_0040.jpg
video/frame_0048.jpg
video/frame_0056.jpg
[I][ Encode][ 528]: pixel_values,size:4
[I][ Encode][ 554]: image encode time : 1546.058960 ms, size : 4
[I][ Encode][ 596]: input_ids size:509
[I][ Encode][ 604]: offset 15
[I][ Encode][ 620]: img_embed.size:4, 433664
[I][ Encode][ 625]: offset:136
[I][ Encode][ 625]: offset:257
[I][ Encode][ 625]: offset:378
[I][ Encode][ 634]: out_embed size:1824256
[I][ Encode][ 636]: position_ids size:509
[I][ Run][ 655]: input token num : 509, prefill_split_num : 4
[I][ Run][ 689]: input_num_token:128
[I][ Run][ 689]: input_num_token:128
[I][ Run][ 689]: input_num_token:128
[I][ Run][ 689]: input_num_token:125
[I][ Run][ 826]: ttft: 5081.97 ms
这张图片展示了两只土拨鼠在户外的山地环境中进行互动。它们似乎在进行一种类似打斗的行为,可能是在争夺领地或展示攻击性。背景是蓝天和山脉,环境看起来非常自然和开阔。土拨鼠的毛色主要是棕色和灰色,带有白色的斑纹。它们的姿势和动作显示出它们正在积极地互动。
[N][ Run][ 979]: hit eos,avg 2.08 token/s
```
|
fxxkingusername/grot-net-ultra-compact-rpm-estimator
|
fxxkingusername
| 2025-08-06T06:41:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-06T06:41:29Z |
# Ultra-Compact Transformer for RPM Estimation
This is the trained model for the ultra-compact transformer architecture designed for real-time RPM estimation from STFT spectrograms.
## Model Details
- **Architecture**: Ultra-compact transformer with frequency-domain attention
- **Input**: STFT spectrograms from engine audio
- **Output**: RPM values (0-2000 range)
- **Performance**: R² = 0.883, MAE = 104.09 RPM, MPE = 6.90%
- **Dataset**: HL-CEAD (High-Level Car Engine Audio Database)
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
from ultra_compact_genius import UltraCompactGeniusRPMArchitecture

# Load model from a local checkpoint
model = UltraCompactGeniusRPMArchitecture()
model.load_state_dict(torch.load('best_ultra_compact_model.pth'))
model.eval()

# Or download the checkpoint from Hugging Face first
model_path = hf_hub_download(
    repo_id="zhutoutoutousan/grot-net-ultra-compact-rpm-estimator",
    filename="best_ultra_compact_model.pth",
)
model.load_state_dict(torch.load(model_path))
```
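The card does not document the STFT parameters or the tensor shape the model expects, so the sketch below only illustrates the general preprocessing path; `n_fft`, `hop_length`, and the audio file name are placeholders:
```python
import torch
import torchaudio

# Hypothetical engine recording; resampling/normalization may be required in practice.
waveform, sample_rate = torchaudio.load("engine_audio.wav")

# Magnitude STFT spectrogram with placeholder parameters.
spectrogram = torch.stft(
    waveform.mean(dim=0),   # mix down to mono
    n_fft=1024,
    hop_length=256,
    return_complex=True,
).abs()

# The expected input layout depends on the architecture definition in ultra_compact_genius.py,
# so the forward call is left commented here.
# with torch.no_grad():
#     rpm = model(spectrogram.unsqueeze(0))  # add a batch dimension
```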
## Paper
For detailed methodology and results, see our paper: "Ultra-Compact Transformer Architecture for Real-Time RPM Estimation from STFT Spectrograms"
## Authors
- Tian Shao ([email protected])
- Raad Bin Tareaf ([email protected])
## License
MIT License
|