| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-07 15:50:20) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 491 classes) | tags (list, 1 – 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-07 15:48:55) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| crystalline7/1484412 | crystalline7 | 2025-08-06T06:49:10Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T06:49:07Z |
[View on Civ Archive](https://civitaiarchive.com/models/1401925?modelVersionId=1584682)
|
| eiknarf/Qwen3-0.6B-Gensyn-Swarm-amphibious_lumbering_beaver | eiknarf | 2025-08-06T06:36:28Z | 14 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am amphibious_lumbering_beaver", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-28T16:13:29Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am amphibious_lumbering_beaver
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
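The card itself leaves this blank; below is a minimal sketch, assuming standard 🤗 transformers text-generation usage for this repo (the model id and pipeline tag come from the table metadata; everything else is an assumption):
```python
# Hedged sketch: assumes the repo loads as a standard text-generation checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="eiknarf/Qwen3-0.6B-Gensyn-Swarm-amphibious_lumbering_beaver",
)
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```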
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| SIGTIR/Qwen3-0.6B-Gensyn-Swarm-hulking_sharp_rhino | SIGTIR | 2025-08-06T06:33:32Z | 8 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am hulking_sharp_rhino", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-07-02T12:42:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hulking_sharp_rhino
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| mradermacher/TinyTim-GGUF | mradermacher | 2025-08-06T06:27:56Z | 57 | 0 | transformers | ["transformers", "gguf", "en", "base_model:npc-worldwide/TinyTimV1", "base_model:quantized:npc-worldwide/TinyTimV1", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-02T03:11:44Z |
---
base_model: npc-worldwide/TinyTimV1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/npc-worldwide/TinyTimV1.
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for TinyTim-GGUF](https://hf.tst.eu/model#TinyTim-GGUF).***
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
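As a concrete starting point, here is a minimal sketch using the llama-cpp-python bindings; the runtime choice and generation settings are assumptions, not prescribed by this card:
```python
# Hedged sketch: assumes llama-cpp-python is installed (pip install llama-cpp-python)
# and that a single-file quant from the table below suffices (no concatenation needed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/TinyTim-GGUF",
    filename="TinyTim.Q4_K_M.gguf",  # the "fast, recommended" quant from the table
)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```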
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyTim-GGUF/resolve/main/TinyTim.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
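To fetch just one of the files above without cloning the whole repo, the huggingface_hub client can download a single quant (an assumed workflow; any HTTP download works equally well):
```python
# Hedged sketch: downloads one GGUF file from this repo into the local HF cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/TinyTim-GGUF",
    filename="TinyTim.Q6_K.gguf",  # "very good quality" per the table above
)
print(path)
```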
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or to request quants of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and for providing upgrades to my workstation, which enable this work in my free time.
<!-- end -->
|
| crystalline7/1155953 | crystalline7 | 2025-08-06T06:22:07Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T06:22:00Z |
[View on Civ Archive](https://civitaiarchive.com/models/1112552?modelVersionId=1250154)
|
| ekiprop/SST-2-GLoRA-p20-seed44 | ekiprop | 2025-08-06T06:21:22Z | 56 | 0 | peft | ["peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us"] | null | 2025-08-06T06:08:52Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p20-seed44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p20-seed44
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- Accuracy: 0.9484
## Model description
More information needed
## Intended uses & limitations
More information needed
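The card does not include usage code; below is a minimal sketch, assuming the repo is a LoRA adapter on roberta-base with a standard two-label SST-2 head (label order is an assumption):
```python
# Hedged sketch: loads the LoRA adapter on top of roberta-base for SST-2 classification.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-GLoRA-p20-seed44")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("A gorgeous, witty, seductive movie.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(pred)  # 0 = negative, 1 = positive, assuming standard SST-2 label order
```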
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4329 | 0.0950 | 200 | 0.2308 | 0.9094 |
| 0.2999 | 0.1900 | 400 | 0.2152 | 0.9266 |
| 0.2894 | 0.2850 | 600 | 0.2093 | 0.9197 |
| 0.2627 | 0.3800 | 800 | 0.2044 | 0.9312 |
| 0.2536 | 0.4751 | 1000 | 0.2257 | 0.9243 |
| 0.2528 | 0.5701 | 1200 | 0.2013 | 0.9278 |
| 0.2507 | 0.6651 | 1400 | 0.1970 | 0.9197 |
| 0.2444 | 0.7601 | 1600 | 0.1911 | 0.9346 |
| 0.2407 | 0.8551 | 1800 | 0.1902 | 0.9335 |
| 0.2321 | 0.9501 | 2000 | 0.2056 | 0.9312 |
| 0.2394 | 1.0451 | 2200 | 0.1748 | 0.9312 |
| 0.2243 | 1.1401 | 2400 | 0.1845 | 0.9358 |
| 0.2269 | 1.2352 | 2600 | 0.2065 | 0.9381 |
| 0.2224 | 1.3302 | 2800 | 0.2012 | 0.9243 |
| 0.2137 | 1.4252 | 3000 | 0.1796 | 0.9381 |
| 0.2143 | 1.5202 | 3200 | 0.1939 | 0.9289 |
| 0.2203 | 1.6152 | 3400 | 0.1898 | 0.9450 |
| 0.2016 | 1.7102 | 3600 | 0.2305 | 0.9289 |
| 0.2061 | 1.8052 | 3800 | 0.2057 | 0.9300 |
| 0.2183 | 1.9002 | 4000 | 0.1907 | 0.9381 |
| 0.2034 | 1.9952 | 4200 | 0.2180 | 0.9369 |
| 0.1968 | 2.0903 | 4400 | 0.1758 | 0.9484 |
| 0.1997 | 2.1853 | 4600 | 0.1675 | 0.9358 |
| 0.2004 | 2.2803 | 4800 | 0.1957 | 0.9438 |
| 0.195 | 2.3753 | 5000 | 0.1702 | 0.9427 |
| 0.196 | 2.4703 | 5200 | 0.1827 | 0.9404 |
| 0.2008 | 2.5653 | 5400 | 0.1865 | 0.9300 |
| 0.2028 | 2.6603 | 5600 | 0.1644 | 0.9438 |
| 0.1964 | 2.7553 | 5800 | 0.1672 | 0.9415 |
| 0.1899 | 2.8504 | 6000 | 0.1735 | 0.9438 |
| 0.1889 | 2.9454 | 6200 | 0.1702 | 0.9415 |
| 0.1819 | 3.0404 | 6400 | 0.1761 | 0.9404 |
| 0.1757 | 3.1354 | 6600 | 0.1827 | 0.9415 |
| 0.192 | 3.2304 | 6800 | 0.1729 | 0.9461 |
| 0.1888 | 3.3254 | 7000 | 0.1823 | 0.9415 |
| 0.1736 | 3.4204 | 7200 | 0.1844 | 0.9404 |
| 0.1804 | 3.5154 | 7400 | 0.1846 | 0.9381 |
| 0.1752 | 3.6105 | 7600 | 0.1940 | 0.9450 |
| 0.1857 | 3.7055 | 7800 | 0.1694 | 0.9450 |
| 0.1855 | 3.8005 | 8000 | 0.1666 | 0.9427 |
| 0.1793 | 3.8955 | 8200 | 0.1735 | 0.9472 |
| 0.1723 | 3.9905 | 8400 | 0.1843 | 0.9450 |
| 0.1732 | 4.0855 | 8600 | 0.1846 | 0.9404 |
| 0.1718 | 4.1805 | 8800 | 0.1942 | 0.9450 |
| 0.1725 | 4.2755 | 9000 | 0.1977 | 0.9461 |
| 0.1657 | 4.3705 | 9200 | 0.1857 | 0.9450 |
| 0.167 | 4.4656 | 9400 | 0.1946 | 0.9450 |
| 0.1629 | 4.5606 | 9600 | 0.1996 | 0.9438 |
| 0.1707 | 4.6556 | 9800 | 0.1850 | 0.9438 |
| 0.1665 | 4.7506 | 10000 | 0.1841 | 0.9438 |
| 0.1805 | 4.8456 | 10200 | 0.1787 | 0.9450 |
| 0.1835 | 4.9406 | 10400 | 0.1792 | 0.9438 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
| hswol/klue-ner-koelectra | hswol | 2025-08-06T06:14:33Z | 2 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "electra", "token-classification", "generated_from_trainer", "base_model:monologg/koelectra-base-v3-discriminator", "base_model:finetune:monologg/koelectra-base-v3-discriminator", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-08-06T05:50:00Z |
---
library_name: transformers
license: apache-2.0
base_model: monologg/koelectra-base-v3-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-ner-koelectra
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-ner-koelectra
This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
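No usage snippet is given; here is a minimal sketch, assuming the checkpoint carries a standard KLUE-NER token-classification head:
```python
# Hedged sketch: runs the fine-tuned KoELECTRA NER model via the pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hswol/klue-ner-koelectra",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)
print(ner("이순신은 조선 중기의 무신이다."))  # "Yi Sun-sin was a military officer of mid-Joseon."
```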
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.54.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
| sourled/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shaggy_wild_alpaca | sourled | 2025-08-06T06:05:30Z | 9 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am shaggy wild alpaca", "unsloth", "trl", "genrl-swarm", "I am shaggy_wild_alpaca", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-25T13:08:53Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shaggy_wild_alpaca
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am shaggy wild alpaca
- unsloth
- trl
- genrl-swarm
- I am shaggy_wild_alpaca
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shaggy_wild_alpaca
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sourled/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shaggy_wild_alpaca", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
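For readers who want to reproduce this kind of setup, the sketch below shows the shape of a GRPO run with TRL's GRPOTrainer; the dataset and reward function are toy placeholders in the style of TRL's documentation, not the actual swarm training recipe:
```python
# Hedged sketch: a generic GRPO fine-tuning loop with TRL, not the Gensyn swarm setup.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters long.
    return [-abs(20 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",  # the base model named by this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-grpo"),
    train_dataset=dataset,
)
trainer.train()
```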
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
| siriusata/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_pouncing_boar | siriusata | 2025-08-06T06:03:36Z | 2 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am nasty_pouncing_boar", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-06T06:02:08Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am nasty_pouncing_boar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| aARSNT/Qwen3-0.6B-Gensyn-Swarm-patterned_alert_bear | aARSNT | 2025-08-06T06:03:03Z | 2 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am patterned_alert_bear", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-02T18:42:11Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am patterned_alert_bear
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare | razor534 | 2025-08-06T06:03:00Z | 5 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am stealthy scurrying hare", "unsloth", "trl", "genrl-swarm", "I am stealthy_scurrying_hare", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-20T13:28:42Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am stealthy scurrying hare
- unsloth
- trl
- genrl-swarm
- I am stealthy_scurrying_hare
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="razor534/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_scurrying_hare", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
| 0k9d0h1/rag_sft_5epochs | 0k9d0h1 | 2025-08-06T05:55:09Z | 44 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-06T05:51:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
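The card leaves this blank; here is a minimal sketch, assuming standard chat-style text generation with 🤗 transformers (the model id and "conversational" tag come from the table metadata):
```python
# Hedged sketch: assumes the checkpoint ships a chat template usable by the pipeline.
from transformers import pipeline

chat = pipeline("text-generation", model="0k9d0h1/rag_sft_5epochs")
messages = [{"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}]
print(chat(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```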
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| hungnghehe/qwen2-vl-7b-finetuned-vlsp-2ndrun | hungnghehe | 2025-08-06T05:48:19Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-05T12:24:04Z |
---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hungnghehe
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
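The card gives no loading code; below is a minimal sketch, assuming the repo holds merged full weights loadable with transformers' Qwen2-VL classes (Unsloth exports are sometimes LoRA adapters only, in which case PeftModel would be needed instead):
```python
# Hedged sketch: loads the fine-tuned Qwen2-VL model and its processor.
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "hungnghehe/qwen2-vl-7b-finetuned-vlsp-2ndrun"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)
```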
|
| TheDrummer/Gemma-3-R1-27B-v1-GGUF | TheDrummer | 2025-08-06T05:45:32Z | 923 | 3 | null | ["gguf", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-04T14:46:49Z |



|
| NTIS/bio-1200 | NTIS | 2025-08-06T05:45:27Z | 13 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-06T05:44:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
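The card leaves this blank; a minimal sketch follows, assuming direct AutoModel loading of this text-generation checkpoint (model id and tags from the table metadata; the prompt and dtype are assumptions):
```python
# Hedged sketch: direct AutoModel loading rather than the pipeline API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("NTIS/bio-1200")
model = AutoModelForCausalLM.from_pretrained("NTIS/bio-1200", torch_dtype=torch.bfloat16)
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "What does this model do?"}],
    add_generation_prompt=True,
    return_tensors="pt",
)
out = model.generate(inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```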
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Alvaro130601/alvaro | Alvaro130601 | 2025-08-06T05:35:23Z | 0 | 0 | null | ["license:other", "region:us"] | null | 2025-08-06T04:10:41Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
| ecamli/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth | ecamli | 2025-08-06T05:34:25Z | 13 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am vocal placid sloth", "unsloth", "trl", "genrl-swarm", "I am vocal_placid_sloth", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-05T17:20:31Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am vocal placid sloth
- unsloth
- trl
- genrl-swarm
- I am vocal_placid_sloth
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ecamli/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-vocal_placid_sloth", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
| crystalline7/658712 | crystalline7 | 2025-08-06T05:33:26Z | 0 | 0 | null | ["region:us"] | null | 2025-08-06T05:33:22Z |
[View on Civ Archive](https://civitaiarchive.com/models/658780?modelVersionId=744976)
|
| Aria12138/cs5210-25su-finetuned-bio2box-lora | Aria12138 | 2025-08-06T05:14:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-06T05:14:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stellalisy/system_select_dpo-1b-lr1e-6-b0.0
|
stellalisy
| 2025-08-06T04:13:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T04:12:27Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EZCon/Qwen2-VL-2B-Instruct-8bit-mlx
|
EZCon
| 2025-08-06T03:58:11Z | 45 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"multimodal",
"qwen",
"qwen2",
"unsloth",
"vision",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-08-01T01:56:25Z |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
license: apache-2.0
tags:
- multimodal
- qwen
- qwen2
- unsloth
- transformers
- vision
- mlx
---
# EZCon/Qwen2-VL-2B-Instruct-8bit-mlx
This model was converted to MLX format from [`unsloth/Qwen2-VL-2B-Instruct`](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
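For scripted use, mlx-vlm also exposes a Python API. The sketch below is not part of the upstream instructions; it assumes the `load`/`generate` helpers and the `apply_chat_template` utility shipped with recent mlx-vlm releases, so check your installed version for the exact signatures.
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "EZCon/Qwen2-VL-2B-Instruct-8bit-mlx"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.jpg"]  # hypothetical local path
prompt = "Describe this image."

# Wrap the plain prompt in the model's chat template, declaring one image slot
formatted = apply_chat_template(processor, config, prompt, num_images=len(images))

output = generate(model, processor, formatted, images, max_tokens=100, temperature=0.0)
print(output)
```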
|
hamid1232/Qwen3-0.6B-Gensyn-Swarm-bipedal_tiny_mosquito
|
hamid1232
| 2025-08-06T03:57:50Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am bipedal_tiny_mosquito",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T22:49:21Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am bipedal_tiny_mosquito
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minimimtoy25/novopezao
|
minimimtoy25
| 2025-08-06T03:57:04Z | 1 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-06T01:29:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
EZCon/Qwen2.5-VL-3B-Instruct-abliterated-8bit-mlx
|
EZCon
| 2025-08-06T03:56:08Z | 184 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"abliterated",
"uncensored",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-05-15T02:33:08Z |
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- abliterated
- uncensored
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# EZCon/Qwen2.5-VL-3B-Instruct-abliterated-8bit-mlx
This model was converted to MLX format from [`huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-abliterated-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
JHelhoski/SmolLM-FT-NYTCw
|
JHelhoski
| 2025-08-06T03:50:44Z | 65 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"base_model:JHelhoski/SmolLM-FT-NYTCw",
"base_model:finetune:JHelhoski/SmolLM-FT-NYTCw",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T05:02:38Z |
---
base_model: JHelhoski/SmolLM-FT-NYTCw
library_name: transformers
model_name: SmolLM-FT-NYTCw
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolLM-FT-NYTCw
This model is a fine-tuned version of [JHelhoski/SmolLM-FT-NYTCw](https://huggingface.co/JHelhoski/SmolLM-FT-NYTCw).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JHelhoski/SmolLM-FT-NYTCw", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jhelhos1-binghamton-university/huggingface/runs/8k2n3sy0)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
arianaazarbal/underspecified_hacker_3_iters_neutral_123
|
arianaazarbal
| 2025-08-06T03:49:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T08:51:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minimimtoy25/pezaomaster
|
minimimtoy25
| 2025-08-06T03:45:45Z | 4 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-06T01:13:40Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx
|
EZCon
| 2025-08-06T03:36:16Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"chat",
"abliterated",
"uncensored",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-08-06T03:35:24Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen2-VL-2B-Instruct
tags:
- chat
- abliterated
- uncensored
- mlx
---
# EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx
This model was converted to MLX format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
zijian2022/y3_smolvla
|
zijian2022
| 2025-08-06T03:30:18Z | 14 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:zijian2022/y3",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-05T16:52:54Z |
---
base_model: lerobot/smolvla_base
datasets: zijian2022/y3
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
krownz/luisreplica
|
krownz
| 2025-08-06T03:29:42Z | 13 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-05T17:34:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Luis
---
# Luisreplica
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Luis` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "Luis",
    "lora_weights": "https://huggingface.co/krownz/luisreplica/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('krownz/luisreplica', weight_name='lora.safetensors')
image = pipeline('Luis').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3230
- Learning rate: 0.0004
- LoRA rank: 30
## Contribute your own examples
You can use the [community tab](https://huggingface.co/krownz/luisreplica/discussions) to add images that show off what you’ve made with this LoRA.
|
open-paws/perceived_trustworthiness_prediction_shortform
|
open-paws
| 2025-08-06T03:14:46Z | 5 | 1 | null |
[
"tensorboard",
"safetensors",
"distilbert",
"animal-liberation",
"animal-advocacy",
"open-paws",
"ethics",
"alignment",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-22T17:01:50Z |
---
license: apache-2.0
base_model_type: llama
tags:
- animal-liberation
- animal-advocacy
- open-paws
- ethics
- alignment
language:
- en
pipeline_tag: text-generation
widget:
- text: "How can we effectively advocate for farm animal welfare?"
- text: "Explain the ethical issues with factory farming"
- text: "What are the benefits of plant-based diets for animals?"
---
# Open Paws Perceived Trustworthiness Prediction Shortform
🐾 **Specialized model for scoring and ranking content based on animal advocacy principles**
## Overview
This model is part of the Open Paws initiative to develop AI systems aligned with animal liberation and advocacy principles. Designed to support advocates, educators, and researchers working toward a more compassionate world for all animals.
## Model Details
- **Model Type**: Ranking Model
- **Model Size**: Compact (under 1B parameters)
- **Architecture**: Transformer-based
- **Training Focus**: Animal advocacy and ethical reasoning
- **Organization**: [Open Paws](https://huggingface.co/open-paws)
- **License**: Apache 2.0
## Intended Use
### Primary Applications
- Content quality assessment for animal advocacy
- Message effectiveness scoring
- Preference modeling for advocacy strategies
- Performance evaluation of educational materials
### Ethical Guidelines
- ✅ Supporting animal welfare and rights advocacy
- ✅ Educational content about animal liberation
- ✅ Ethical decision-making frameworks
- ❌ Content that promotes animal exploitation
- ❌ Justifying harm to sentient beings
## Usage
### Installation
```bash
pip install transformers torch
```
### Basic Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load tokenizer and the classification head
# (a bare AutoModel has no .logits, so use the sequence-classification class)
model_name = "open-paws/perceived_trustworthiness_prediction_shortform"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score content for animal advocacy alignment
content = "Plant-based diets reduce animal suffering significantly"
inputs = tokenizer(content, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits
print(f"Advocacy alignment score: {score.item():.3f}")
```
## Community and Contributions
- **Organization**: [Open Paws](https://huggingface.co/open-paws) - Making AI an ally to animals
- **Website**: [openpaws.ai](https://www.openpaws.ai/)
- **Community**: Join our mission to use AI for animal liberation
- **Issues**: Report issues via HuggingFace discussions
## Model Card Contact
For questions about this model, please reach out via:
- **HuggingFace Discussions**: [open-paws/perceived_trustworthiness_prediction_shortform](https://huggingface.co/open-paws/perceived_trustworthiness_prediction_shortform/discussions)
- **Organization Page**: [Open Paws](https://huggingface.co/open-paws)
---
*Built with 🐾 for animal liberation and AI alignment*
|
arianaazarbal/underspecified_hacker_3_iters_tests_5
|
arianaazarbal
| 2025-08-06T02:59:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T08:04:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NexVeridian/Qwen3-Coder-30B-A3B-Instruct-4bit
|
NexVeridian
| 2025-08-06T02:50:47Z | 36 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-06T02:36:00Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
---
# NexVeridian/Qwen3-Coder-30B-A3B-Instruct-4bit
This model [NexVeridian/Qwen3-Coder-30B-A3B-Instruct-4bit](https://huggingface.co/NexVeridian/Qwen3-Coder-30B-A3B-Instruct-4bit) was
converted to MLX format from [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3-Coder-30B-A3B-Instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
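mlx-lm also ships a command-line generator; as a quick smoke test (flags per recent mlx-lm releases, so verify against your installed version):
```bash
python -m mlx_lm.generate --model NexVeridian/Qwen3-Coder-30B-A3B-Instruct-4bit --prompt "hello"
```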
|
frednamfred/mistral-7b-qlora-alpaca-sample-0.5k_instruct-wo-input_cot4-pf
|
frednamfred
| 2025-08-06T02:43:24Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T02:37:51Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF
|
maerong3
| 2025-08-06T02:21:20Z | 20 | 0 |
vllm
|
[
"vllm",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated",
"base_model:quantized:huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated",
"license:apache-2.0",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-08-06T02:20:32Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model: huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: image-text-to-text
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated`](https://huggingface.co/huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF --hf-file huihui-mistral-small-3.2-24b-instruct-2506-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF --hf-file huihui-mistral-small-3.2-24b-instruct-2506-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF --hf-file huihui-mistral-small-3.2-24b-instruct-2506-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF --hf-file huihui-mistral-small-3.2-24b-instruct-2506-abliterated-q4_k_m.gguf -c 2048
```
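If you prefer Python, the same GGUF can be loaded with llama-cpp-python. A minimal sketch, assuming a llama-cpp-python build that provides `Llama.from_pretrained` (which fetches the file via huggingface_hub):
```python
from llama_cpp import Llama

# Download the quantized GGUF from this repo and load it
llm = Llama.from_pretrained(
    repo_id="maerong3/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-Q4_K_M-GGUF",
    filename="huihui-mistral-small-3.2-24b-instruct-2506-abliterated-q4_k_m.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}]
)
print(out["choices"][0]["message"]["content"])
```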
|
vivektyagiibm/foundation_sec_8B_lora_qkv_o_gate_up_down_rank_256
|
vivektyagiibm
| 2025-08-06T02:18:54Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-05-11T16:12:55Z |
---
base_model: codellama/CodeLlama-7b-hf
datasets:
- generator
library_name: peft
license: llama2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: foundation_sec_8B_lora_qkv_o_gate_up_down_rank_256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# foundation_sec_8B_lora_qkv_o_gate_up_down_rank_256
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.3.1
- Datasets 2.16.1
- Tokenizers 0.15.2
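The card above gives no usage snippet; the following is a minimal sketch (not from the original card) for attaching this LoRA adapter to the stated base model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "vivektyagiibm/foundation_sec_8B_lora_qkv_o_gate_up_down_rank_256"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("def is_valid_url(url):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```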
|
AXERA-TECH/Qwen2.5-VL-3B-Instruct
|
AXERA-TECH
| 2025-08-06T01:45:21Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"Qwen2.5-VL",
"Qwen2.5-VL-3B-Instruct",
"Int8",
"VLM",
"image-text-to-text",
"en",
"zh",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-28T12:34:06Z |
---
license: mit
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- Qwen2.5-VL
- Qwen2.5-VL-3B-Instruct
- Int8
- VLM
---
# Qwen2.5-VL-3B-Instruct
This version of Qwen2.5-VL-3B-Instruct has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links
For those interested in model conversion, you can export the axmodel yourself starting from the original repo:
https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU HOST LLM Runtime](https://github.com/AXERA-TECH/Qwen2.5-VL-3B-Instruct.axera)
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
**Image processing**
| Chip | Input size | Image num | Image encoder latency | TTFT (320 tokens) | Decode speed (w8a16) | DDR | Flash |
|--|--|--|--|--|--|--|--|
| AX650 | 448*448 | 1 | 780 ms | 2857 ms | 6.2 tokens/s | 4.3 GiB | 4.6 GiB |
**Video processing**
| Chip | Input size | Image num | Image encoder latency | TTFT (512 tokens) | Decode speed (w8a16) | DDR | Flash |
|--|--|--|--|--|--|--|--|
| AX650 | 308*308 | 8 | 1400 ms | 5400 ms | 6.1 tokens/s | 4.4 GiB | 4.7 GiB |
The DDR column refers to the CMM memory consumed at runtime; make sure the CMM memory allocated on the development board is larger than this value.
## How to use
Download all files from this repository to the device.
**If you are using an AX650 board:**
```
root@ax650:/mnt/qtang/llm-test/qwen2.5-vl-3b# tree -L 2
.
├── image
│ └── ssd_car.jpg
├── main
├── main_axcl_x86
├── main_axcl_aarch64
├── python
│ ├── cv_resize.py
│ ├── infer_image.py
│ ├── infer_text.py
│ ├── infer_video.py
│ ├── preprocess.py
│ └── utils.py
├── qwen2_5-vl-3b-image-ax650
│ ├── Qwen2.5-VL-3B-Instruct_vision_nchw448.axmodel
│ ├── model.embed_tokens.weight.bfloat16.bin
│ ├── qwen2_5_vl_p320_l0_together.axmodel
......
│ ├── qwen2_5_vl_p320_l9_together.axmodel
│ └── qwen2_5_vl_post.axmodel
├── qwen2_5-vl-3b-video-ax650
│ ├── Qwen2.5-VL-3B-Instruct_vision_nhwc.axmodel
│ ├── model.embed_tokens.weight.bfloat16.bin
│ ├── qwen2_5_vl_p512_l0_together.axmodel
......
│ ├── qwen2_5_vl_p512_l9_together.axmodel
│ └── qwen2_5_vl_post.axmodel
├── qwen2_5-vl-tokenizer
│ ├── chat_template.json
│ ├── config.json
│ ├── generation_config.json
│ ├── merges.txt
│ ├── model.safetensors.index.json
│ ├── preprocessor_config.json
│ ├── tokenizer.json
│ ├── tokenizer_config.json
│ └── vocab.json
├── qwen2_tokenizer_image_448.py
├── qwen2_tokenizer_video_308.py
├── run_qwen2_5_vl_image.sh
├── run_qwen2_5_vl_video.sh
├── run_qwen2_5_vl_image_axcl_x86.sh
├── run_qwen2_5_vl_image_axcl_aarch64.sh
├── run_qwen2_5_vl_video_axcl_x86.sh
├── run_qwen2_5_vl_video_axcl_aarch64.sh
└── video
├── frame_0075.jpg
......
└── frame_0089.jpg
```
### Prepare tokenizer server
#### Install transformers
```
pip install transformers==4.41.1 jinja2
```
### Demo Run
#### Image understanding demo
##### Start the tokenizer server for the image understanding demo
```
python3 qwen2_tokenizer_image_448.py --port 12345
```
##### Run the image understanding demo
- input text
```
描述下图片
```
- input image

```
root@ax650:/mnt/qtang/llm-test/qwen2.5-vl-3b# ./run_qwen2_5_vl_image.sh
[I][ Init][ 129]: LLM init start
bos_id: -1, eos_id: 151645
2% | █ | 1 / 40 [0.01s<0.24s, 166.67 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 40 / 40 [38.23s<38.23s, 1.05 count/s] init vpm axmodel ok,remain_cmm(7600 MB)
[I][ Init][ 277]: max_token_len : 1023
[I][ Init][ 282]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 290]: prefill_token_num : 320
[I][ Init][ 292]: vpm_height : 1024,vpm_width : 392
[I][ Init][ 301]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> who are you?
image >>
[I][ Run][ 638]: ttft: 2854.47 ms
I am a large language model created by Alibaba Cloud. I am called Qwen.
[N][ Run][ 779]: hit eos,avg 6.05 token/s
prompt >> 描述下图片
image >> image/ssd_car.jpg
[I][ Encode][ 416]: image encode time : 795.614014 ms, size : 524288
[I][ Run][ 638]: ttft: 2856.88 ms
这张图片展示了一条繁忙的城市街道。前景中,一名女子站在人行道上,她穿着黑色外套,面带微笑。她旁边是一辆红色的双层巴士,巴士上有一个广告,
上面写着“THINGS GET MORE EXITING WHEN YOU SAY ‘YES’”。巴士的车牌号是“L15”。巴士旁边停着一辆黑色的小型货车。背景中可以看到一些商店和行人,
街道两旁的建筑物是现代的玻璃幕墙建筑。整体氛围显得繁忙而充满活力。
[N][ Run][ 779]: hit eos,avg 5.96 token/s
```
#### Video understanding demo
Pre-process the frames of the video file into 308x308 images beforehand.
##### Start the tokenizer server for the video understanding demo
```
python qwen2_tokenizer_video_308.py --port 12345
```
##### Run the video understanding demo
```
root@ax650:/mnt/qtang/llm-test/qwen2.5-vl-3b# ./run_qwen2_5_vl_video.sh
[I][ Init][ 129]: LLM init start
bos_id: -1, eos_id: 151645
2% | █ | 1 / 40 [0.00s<0.12s, 333.33 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 40 / 40 [40.05s<40.05s, 1.00 count/s] init vpm axmodel ok,remain_cmm(7680 MB)
[I][ Init][ 277]: max_token_len : 1023
[I][ Init][ 282]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 290]: prefill_token_num : 512
[I][ Init][ 292]: vpm_height : 484,vpm_width : 392
[I][ Init][ 301]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> 描述下视频
image >> video
video/frame_0000.jpg
video/frame_0008.jpg
video/frame_0016.jpg
video/frame_0024.jpg
video/frame_0032.jpg
video/frame_0040.jpg
video/frame_0048.jpg
video/frame_0056.jpg
[I][ Encode][ 416]: image encode time : 1487.557007 ms, size : 991232
[I][ Run][ 638]: ttft: 5488.29 ms
视频展示了两只松鼠在户外的场景。背景是模糊的山脉和蓝天,前景中有松鼠在互动。松鼠的毛色主要是棕色和白色,它们的爪子是橙色的。松鼠似乎在互相玩耍或争抢,它们的爪子和嘴巴都伸向对方。整个场景显得非常自然和生动。
```
#### Inference with M.2 Accelerator card
This demo runs the M.2 Accelerator card (linked above) on a Raspberry Pi 5 host.
#### Image understanding demo
##### Start the tokenizer server for the image understanding demo
```
python3 qwen2_tokenizer_image_448.py --port 12345
```
##### Run the image understanding demo
- input text
```
描述这张图片
```
- input image

```
(base) axera@raspberrypi:~/lhj/Qwen2.5-VL-3B-Instruct $ bash run_qwen2_5_vl_image_axcl_aarch64.sh
[I][ Init][ 162]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 267]: IMAGE_CONTEXT_TOKEN: 151655, IMAGE_START_TOKEN: 151652
[I][ Init][ 328]: image encoder output float32
[I][ Init][ 340]: max_token_len : 1023
[I][ Init][ 343]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 351]: prefill_token_num : 128
[I][ Init][ 355]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 355]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 355]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 355]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 355]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 359]: prefill_max_token_num : 512
________________________
| ID| remain cmm(MB)|
========================
| 0| 2286|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[E][ load_config][ 278]: config file(post_config.json) open failed
[W][ Init][ 452]: load postprocess config(post_config.json) failed
[I][ Init][ 456]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> 描述这张图片
image >> image/ssd_car.jpg
[I][ Encode][ 539]: image encode time : 772.851990 ms, size : 524288
[I][ Run][ 625]: input token num : 280, prefill_split_num : 3
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:24
[I][ Run][ 796]: ttft: 2067.18 ms
这张图片展示了一条繁忙的城市街道。前景中,一名女子站在人行道上,穿着黑色外套,面带微笑。她旁边是一辆红色的双层巴士,巴士上有一个广告,上面写着“THINGS GET MORE EXITING WHEN YOU SAY ‘YES’ VirginMoney.co.uk”。巴士的车牌号是“L15”。巴士旁边停着一辆黑色的面包车。背景中可以看到一些商店和行人,街道两旁有路灯和商店的招牌。整体环境显得非常繁忙和现代。
[N][ Run][ 949]: hit eos,avg 4.12 token/s
```
#### Video understanding demo
Please pre-process the video file into 308x308 frame images, as shown in the sketch earlier.
##### start tokenizer server for video understanding demo
```
python qwen2_tokenizer_video_308.py --port 12345
```
##### run video understanding demo
```
(base) axera@raspberrypi:~/lhj/Qwen2.5-VL-3B-Instruct $ bash run_qwen2_5_vl_video_axcl_aarch64.sh
[I][ Init][ 162]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 267]: IMAGE_CONTEXT_TOKEN: 151656, IMAGE_START_TOKEN: 151652
[I][ Init][ 328]: image encoder output float32
[I][ Init][ 340]: max_token_len : 1023
[I][ Init][ 343]: kv_cache_size : 256, kv_cache_num: 1023
[I][ Init][ 351]: prefill_token_num : 128
[I][ Init][ 355]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 355]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 355]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 355]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 355]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 359]: prefill_max_token_num : 512
________________________
| ID| remain cmm(MB)|
========================
| 0| 2464|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[E][ load_config][ 278]: config file(post_config.json) open failed
[W][ Init][ 452]: load postprocess config(post_config.json) failed
[I][ Init][ 456]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> Describe the content of this video
image >> video
video/frame_0000.jpg
video/frame_0008.jpg
video/frame_0016.jpg
video/frame_0024.jpg
video/frame_0032.jpg
video/frame_0040.jpg
video/frame_0048.jpg
video/frame_0056.jpg
[I][ Encode][ 539]: image encode time : 1481.107056 ms, size : 991232
[I][ Run][ 625]: input token num : 509, prefill_split_num : 4
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:128
[I][ Run][ 659]: input_num_token:125
[I][ Run][ 796]: ttft: 3049.59 ms
The video shows two squirrels outdoors. The background is blurred mountains and blue sky, with the squirrels interacting in the foreground. Their fur is a mix of brown and gray, and their paws are orange. The squirrels appear to be playing with or grabbing at each other, reaching toward one another with their paws and mouths. The whole scene looks very natural and lively.
[N][ Run][ 949]: hit eos,avg 4.15 token/s
```
|
allura-org/Koto-22B-PT
|
allura-org
| 2025-08-06T01:44:11Z | 83 | 4 | null |
[
"safetensors",
"mistral",
"writing",
"creative-writing",
"arxiv:2312.15166",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-08-06T00:42:21Z |
---
base_model:
- mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
tags:
- writing
- creative-writing
---
# Koto 22B (Pretrained)

Koto-22B-PT is a [depth-upscaled](https://arxiv.org/abs/2312.15166) version of Mistral-Nemo-Base-2407, healed and trained on almost a billion tokens of creative writing data.
## Usage
This model is not intended for use outside of raw text completion settings, such as cowriting. Instruct will *not* work. Multi-turn roleplay will *not* work.
It was trained at 32k, but as not all samples were this long, we expect that in the best case you can get ~16k effective context.
We found that 1.5-1.55 temperature and 0.05-0.1 min_p worked best, but YMMV!
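A minimal raw-completion sketch with these settings (assumes a recent `transformers` with `min_p` sampling support, and `accelerate` installed for `device_map="auto"`; the prompt is a hypothetical opening line):
```python
# Raw text completion sketch for Koto-22B-PT with the suggested sampling settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("allura-org/Koto-22B-PT")
model = AutoModelForCausalLM.from_pretrained(
    "allura-org/Koto-22B-PT", torch_dtype="auto", device_map="auto"
)

prompt = "The rain had not let up for three days, and"  # hypothetical opening line
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                     temperature=1.5, min_p=0.05)
print(tok.decode(out[0], skip_special_tokens=True))
```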
## Datasets
Some of the data used to train this model includes:
- Most of [The Anarchist Library](https://theanarchistlibrary.org/), a repository for anarchist manifestos and writing (see [allura-org/the-anarchist-library](https://huggingface.co/datasets/allura-org/the-anarchist-library))
- A random sample of public domain books from Project Gutenberg
- Furry (anthro and feral) storytelling and smut
- A small subset of known high-quality books and story data
## Acknowledgements
- thank you to [@takeshimaxfj](https://x.com/takeshimaxfj) on twitter for drawing the art used in the model card!
- thank you very much to [mango/deltavector](https://huggingface.co/Delta-Vector) for providing the compute used to train this model
- thanks to curse for testing, ideas
- thanks to toasty for some data, ideas
- thanks to everyone else in allura for moral support
ilya <3
## Technical Appendix
<details>
### Training Notes
This model was trained over the course of ~14 hours on an 8xB200 node. We used 8-bit AdamW and the REX LR scheduler, as well as both gradient clipping and weight decay for regularization.
There *was* a very odd loss spike ~60% of the way through training, but it recovered and the model seems fine? So? Eh? If it works it works :3
### WandB

### Finetuning Notes
ChatML tokens have already been added to this model in case you prefer to tune using that chat format. Please do not re-add them, so that the vocab size stays unchanged for (possible) usage on hosts like Featherless.
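A quick sanity-check sketch (both token strings are standard ChatML; they should already resolve to real ids, with no embedding resize needed):
```python
# Verify the ChatML tokens are already in the vocab before tuning.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("allura-org/Koto-22B-PT")
print(tok.convert_tokens_to_ids(["<|im_start|>", "<|im_end|>"]))
```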
### Axolotl Config
```yaml
## model
base_model: allura-forge/nemo-upscaled-2
#tokenizer_use_mistral_common: true
## qlora COPE!!!
load_in_8bit: false
load_in_4bit: false
strict: false
## data
datasets:
- path: estrogen/bookscpt2
type: completion
field: text
shuffle_merged_datasets: true
dataset_prepared_path: dataset_preparedss
val_set_size: 0.0
output_dir: ./Pretrain
## Liger + CCE
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
## CTX settings
sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
## max grad norm
max_grad_norm: 1.0
## WandB
wandb_project: NeMo-Upscale
wandb_entity:
wandb_watch:
wandb_name: Pretrain-22B
wandb_log_model:
## hoe params
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: rex
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 50
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.0025
fsdp:
fsdp_config:
special_tokens:
pad_token: <pad>
```
### Mergekit Config
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
# untouched intro
- sources:
- layer_range: [0, 8]
model: mistralai/Mistral-Nemo-Base-2407
- sources:
- layer_range: [8, 12]
model: mistralai/Mistral-Nemo-Base-2407
# 8–16 baseline
- sources:
- layer_range: [8, 16]
model: mistralai/Mistral-Nemo-Base-2407
# 8–16 duplicate with projections nulled
- sources:
- layer_range: [8, 16]
model: mistralai/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
# 16–24 duplicate
- sources:
- layer_range: [16, 24]
model: mistralai/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
# 16–24 baseline
- sources:
- layer_range: [16, 24]
model: mistralai/Mistral-Nemo-Base-2407
# 16–24 duplicate
- sources:
- layer_range: [16, 24]
model: mistralai/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
# 24–32 baseline
- sources:
- layer_range: [24, 32]
model: mistralai/Mistral-Nemo-Base-2407
# 24–32 duplicate
- sources:
- layer_range: [24, 32]
model: mistralai/Mistral-Nemo-Base-2407
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
# untouched tail
- sources:
- layer_range: [32, 40]
model: mistralai/Mistral-Nemo-Base-2407
```
</details>
|
steampunque/Qwen3-30B-A3B-Instruct-2507-Hybrid-GGUF
|
steampunque
| 2025-08-06T01:29:56Z | 33 | 0 | null |
[
"gguf",
"Qwen",
"Qwen3 Instruct 2507",
"GGUF",
"quantized",
"4-bit",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T15:19:42Z |
---
license: apache-2.0
base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
base_model_relation: quantized
tags:
- Qwen
- Qwen3 Instruct 2507
- GGUF
- quantized
- 4-bit
---
## Llama.cpp hybrid layer quantization of Qwen3-30B-A3B-Instruct-2507 by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507
The hybrid quant employs different quantization levels on a per-layer basis to increase
flexibility in trading off performance against file size. Fewer parameter bits are used at deep layers
and more bits at cortex layers, simultaneously optimizing quantized size and model performance.
For this file the layer quants are as follows:
```
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q3_K_L"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_L"],[9 ,"Q3_K_M"],[10,"Q3_K_L"],[11,"Q3_K_M"],[12,"Q3_K_L"],[13,"Q3_K_M"],[14,"Q3_K_L"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_L"],[22,"Q3_K_L"],[23,"Q3_K_L"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q3_K_L"],[34,"Q4_K_S"],[35,"Q3_K_L"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
[40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_S"],[43,"Q4_K_S"],[44,"Q4_K_M"],[45,"Q5_K_S"],[46,"Q5_K_M"],[47,"Q6_K" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
These layer quants were optimized for good performance on both code and reasoning problems across a small set of
curated test/eval prompts, and also for generation stability with greedy sampling. NOTE: this quant was re-uploaded
with a different layer quant distribution after the initial upload. To verify you have the correct file, make sure
it is ~16.8 GB in size or check its sha256 against the repo listing, e.g.:
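For example (file name taken from the download table below):
```
sha256sum Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf
ls -l Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf   # expect ~16.8 GB
```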
Comparison:
Quant | size | PPL | Comment
---------|---------|------|-----------
IQ4_XS | 16.6e9 | 7.4 | default embed and output, unstable with greedy sampling
Q4_K_H | 16.8e9 | 7.4 | Q6_K embed Q6_K output, stable with greedy sampling
Note the straightforward IQ4_XS quant was found unusable: the model goes into an infinite repetition loop at
random points on some prompts with greedy sampling. This issue was not observed across the eval set used to optimize
the hybrid layer quants (by design).
Usage:
Compared to the first Qwen3-30B-A3B, this model changes:
1) A bigger native context of 256k, extendable to 1M with rope
2) No thinking mode is available; however, the model can automatically generate "wait ..." reflections during
generation depending on the problem.
This MoE model can be run efficiently by offloading expert tensors to CPU via -ot exps=CPU,
which opens up very large context space. The smaller size of the optimally quantized parameters gives
an effective boost in CPU processing speed by reducing the memory bandwidth needed to repeatedly copy them
from main memory to SIMD registers. It can also run fully offloaded on GPU via RPC or on a high-VRAM GPU.
The recommended speculator for the model is Qwen3-0.6B, provided the inference platform supports
vocabulary translation between the draft and target models. Approximate performance using a 4070 GPU and a 9900K
CPU with a downstream speculator in llama.cpp (a sketch of such an invocation follows the table):
Config | block 8 speculated code gen speed | block 4 non code gen speed
---------|---------|------
2 4070, RPC, fully offloaded to GPU | 83 t/s | 41 t/s
1 4070, -ot exps=CPU, CPU=9900k | 34 t/s | 18 t/s
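A minimal launch sketch (the `-ot exps=CPU` flag comes from this card; the draft-model file name is hypothetical, and flag spellings should be verified against your llama.cpp build):
```
# Expert tensors on CPU, everything else on GPU, Qwen3-0.6B as draft model
./llama-server -m Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf \
    -md Qwen3-0.6B.Q8_0.gguf --draft-max 8 \
    -ngl 99 -ot exps=CPU -c 32768
```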
Benchmarks:
Evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.
## Download the file from below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf](https://huggingface.co/steampunque/Qwen3-30B-A3B-Instruct-2507-Hybrid-GGUF/resolve/main/Qwen3-30B-A3B-Instruct-2507.Q4_K_H.gguf) | Q4_K_H | 16.8e9 B | ~IQ4_XS size |
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:
https://github.com/ggml-org/llama.cpp/discussions/13040
|
N8Programs/Gemma-3-4B-Unslopper-3
|
N8Programs
| 2025-08-06T01:28:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-06T01:26:10Z |
---
base_model: unsloth/gemma-3-4b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** N8Programs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qualcomm/SINet
|
qualcomm
| 2025-08-06T01:12:45Z | 48 | 3 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-segmentation",
"arxiv:1911.09099",
"license:other",
"region:us"
] |
image-segmentation
| 2024-02-25T22:41:01Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-segmentation
---

# SINet: Optimized for Mobile Deployment
## Lightweight portrait segmentation for background removal
SINet is a machine learning model that is designed to segment people from close-up portrait images in real time.
This model is an implementation of SINet found [here](https://github.com/clovaai/ext_portrait_segmentation).
This repository provides scripts to run SINet on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/sinet).
### Model Details
- **Model Type:** Model_use_case.semantic_segmentation
- **Model Stats:**
- Model checkpoint: SINet.pth
- Input resolution: 224x224
- Number of output classes: 2 (foreground / background)
- Number of parameters: 91.9K
- Model size (float): 415 KB
- Model size (w8a8): 241 KB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| SINet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 3.625 ms | 0 - 17 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 3.44 ms | 0 - 18 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.805 ms | 0 - 27 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.32 ms | 1 - 34 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.62 ms | 0 - 7 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.591 ms | 1 - 9 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.088 ms | 0 - 17 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 2.009 ms | 0 - 19 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 3.625 ms | 0 - 17 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 3.44 ms | 0 - 18 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.617 ms | 0 - 6 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.593 ms | 1 - 8 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.374 ms | 0 - 19 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 2.371 ms | 1 - 26 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.616 ms | 0 - 7 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.596 ms | 1 - 8 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.088 ms | 0 - 17 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 2.009 ms | 0 - 19 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.616 ms | 0 - 7 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.595 ms | 1 - 7 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.122 ms | 0 - 9 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet.onnx) |
| SINet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.085 ms | 0 - 32 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.07 ms | 0 - 35 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.364 ms | 0 - 30 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet.onnx) |
| SINet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.995 ms | 0 - 18 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet.tflite) |
| SINet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.98 ms | 1 - 24 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.339 ms | 1 - 26 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet.onnx) |
| SINet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.208 ms | 0 - 0 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet.dlc) |
| SINet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.282 ms | 2 - 2 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet.onnx) |
| SINet | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 2.398 ms | 0 - 18 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.517 ms | 0 - 18 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.294 ms | 0 - 35 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 1.433 ms | 0 - 32 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.185 ms | 0 - 8 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.295 ms | 0 - 7 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.462 ms | 0 - 19 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 1.54 ms | 0 - 18 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 18.804 ms | 0 - 21 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 2.398 ms | 0 - 18 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.517 ms | 0 - 18 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.183 ms | 0 - 7 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.284 ms | 0 - 9 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.75 ms | 0 - 21 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.907 ms | 0 - 24 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 1.179 ms | 0 - 8 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.283 ms | 0 - 8 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.462 ms | 0 - 19 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 1.54 ms | 0 - 18 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.184 ms | 0 - 7 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.282 ms | 0 - 8 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 7.268 ms | 4 - 14 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.onnx) |
| SINet | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.823 ms | 0 - 26 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.885 ms | 0 - 29 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 5.241 ms | 6 - 27 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.onnx) |
| SINet | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 0.66 ms | 0 - 26 MB | NPU | [SINet.tflite](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.tflite) |
| SINet | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.834 ms | 0 - 21 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 5.15 ms | 5 - 21 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.onnx) |
| SINet | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 1.524 ms | 0 - 0 MB | NPU | [SINet.dlc](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.dlc) |
| SINet | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 7.208 ms | 6 - 6 MB | NPU | [SINet.onnx](https://huggingface.co/qualcomm/SINet/blob/main/SINet_w8a8.onnx) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.sinet.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sinet.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.sinet.export
```
```
Profiling Results
------------------------------------------------------------
SINet
Device : cs_8275 (ANDROID 14)
Runtime : TFLITE
Estimated inference time (ms) : 3.6
Estimated peak memory usage (MB): [0, 17]
Total # Ops : 222
Compute Unit(s) : npu (222 ops) gpu (0 ops) cpu (0 ops)
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/sinet/qai_hub_models/models/SINet/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.sinet import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative error, or
spot-check the output against the expected output. For example:
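A rough sketch of such a spot check (assumes `input_data` and `on_device_output` map tensor names to lists of numpy arrays, as `sample_inputs()` / `download_output_data()` typically return; adjust to the actual structure):
```python
# Compare on-device output against the local PyTorch output (sketch).
import numpy as np
import torch

sample = next(iter(input_data.values()))[0]
torch_out = torch_model(torch.tensor(sample)).detach().numpy()
device_out = next(iter(on_device_output.values()))[0]
print("max abs error vs PyTorch:", np.abs(device_out - torch_out).max())
```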
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.sinet.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sinet.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on SINet's performance across various devices [here](https://aihub.qualcomm.com/models/sinet).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of SINet can be found
[here](https://github.com/clovaai/ext_portrait_segmentation/blob/master/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial Squeeze Modules and Information Blocking Decoder](https://arxiv.org/abs/1911.09099)
* [Source Model Implementation](https://github.com/clovaai/ext_portrait_segmentation)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
qualcomm/SESR-M5
|
qualcomm
| 2025-08-06T01:11:10Z | 53 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"android",
"image-to-image",
"arxiv:2103.09404",
"license:other",
"region:us"
] |
image-to-image
| 2024-02-25T22:53:03Z |
---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-to-image
---

# SESR-M5: Optimized for Mobile Deployment
## Upscale images in real time
SESR M5 performs efficient on-device upscaling of images.
This model is an implementation of SESR-M5 found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr).
This repository provides scripts to run SESR-M5 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/sesr_m5).
### Model Details
- **Model Type:** Model_use_case.super_resolution
- **Model Stats:**
- Model checkpoint: sesr_m5_3x_checkpoint
- Input resolution: 128x128
- Number of parameters: 343K
- Model size (float): 1.32 MB
- Model size (w8a8): 395 KB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| SESR-M5 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 12.601 ms | 6 - 18 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 10.474 ms | 0 - 12 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 3.267 ms | 0 - 21 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 2.799 ms | 0 - 27 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 2.22 ms | 0 - 7 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 1.948 ms | 0 - 5 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 3.767 ms | 0 - 14 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 3.126 ms | 0 - 14 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 12.601 ms | 6 - 18 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 10.474 ms | 0 - 12 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 2.181 ms | 0 - 6 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 1.974 ms | 0 - 6 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 5.541 ms | 0 - 22 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 3.32 ms | 0 - 18 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.216 ms | 0 - 7 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 1.977 ms | 0 - 6 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 3.767 ms | 0 - 14 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 3.126 ms | 0 - 14 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 2.325 ms | 0 - 7 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 1.967 ms | 0 - 6 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 2.629 ms | 0 - 4 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.onnx) |
| SESR-M5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.388 ms | 0 - 26 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 1.261 ms | 0 - 22 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1.605 ms | 0 - 29 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.onnx) |
| SESR-M5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.765 ms | 0 - 18 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.tflite) |
| SESR-M5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 1.304 ms | 0 - 20 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 1.411 ms | 6 - 30 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.onnx) |
| SESR-M5 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.192 ms | 0 - 0 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.dlc) |
| SESR-M5 | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 2.552 ms | 8 - 8 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5.onnx) |
| SESR-M5 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 5.691 ms | 2 - 14 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 2.019 ms | 0 - 13 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 1.696 ms | 0 - 22 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.956 ms | 0 - 23 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 1.331 ms | 0 - 6 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.646 ms | 0 - 9 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 2.085 ms | 1 - 14 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.895 ms | 0 - 12 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 5.395 ms | 0 - 16 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | QNN_DLC | 3.12 ms | 0 - 16 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | TFLITE | 23.455 ms | 2 - 4 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 5.691 ms | 2 - 14 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 2.019 ms | 0 - 13 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 1.348 ms | 0 - 6 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.647 ms | 0 - 9 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 2.905 ms | 0 - 19 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.425 ms | 0 - 16 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 2.226 ms | 1 - 7 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.654 ms | 0 - 9 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 2.085 ms | 1 - 14 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.895 ms | 0 - 12 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 1.388 ms | 0 - 6 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 0.662 ms | 0 - 9 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 0.965 ms | 0 - 8 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.onnx) |
| SESR-M5 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 1.121 ms | 0 - 24 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.471 ms | 0 - 24 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.675 ms | 0 - 26 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.onnx) |
| SESR-M5 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 1.645 ms | 0 - 22 MB | NPU | [SESR-M5.tflite](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.tflite) |
| SESR-M5 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 0.473 ms | 0 - 21 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 0.739 ms | 0 - 19 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.onnx) |
| SESR-M5 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.913 ms | 1 - 1 MB | NPU | [SESR-M5.dlc](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.dlc) |
| SESR-M5 | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.054 ms | 3 - 3 MB | NPU | [SESR-M5.onnx](https://huggingface.co/qualcomm/SESR-M5/blob/main/SESR-M5_w8a8.onnx) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.sesr_m5.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sesr_m5.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.sesr_m5.export
```
```
Profiling Results
------------------------------------------------------------
SESR-M5
Device : cs_8275 (ANDROID 14)
Runtime : TFLITE
Estimated inference time (ms) : 12.6
Estimated peak memory usage (MB): [6, 18]
Total # Ops : 25
Compute Unit(s) : npu (22 ops) gpu (0 ops) cpu (3 ops)
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/sesr_m5/qai_hub_models/models/SESR-M5/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.sesr_m5 import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative error, or
spot-check the output against the expected output. For example:
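A rough PSNR sketch under the same assumptions about `input_data` / `on_device_output` as above (and assuming outputs are float images in [0, 1]; adjust the peak value to your data range):
```python
# PSNR between on-device and local PyTorch outputs (sketch).
import numpy as np
import torch

sample = next(iter(input_data.values()))[0]
torch_out = torch_model(torch.tensor(sample)).detach().numpy()
device_out = next(iter(on_device_output.values()))[0]
mse = np.mean((device_out - torch_out) ** 2)
psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
print(f"PSNR vs PyTorch: {psnr:.2f} dB")
```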
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.sesr_m5.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.sesr_m5.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export ): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on SESR-M5's performance across various devices [here](https://aihub.qualcomm.com/models/sesr_m5).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
* The license for the original implementation of SESR-M5 can be found
[here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Collapsible Linear Blocks for Super-Efficient Super Resolution](https://arxiv.org/abs/2103.09404)
* [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/sesr)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
saberbx/LLAmaSentry
|
saberbx
| 2025-08-06T00:46:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T06:37:10Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ameyapores/DP_unity_dvrk_pushblock_july30
|
Ameyapores
| 2025-08-06T00:45:43Z | 2 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:Ameyapores/dvrk_pushblock_july30_obsimg",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-06T00:45:26Z |
---
datasets: Ameyapores/dvrk_pushblock_july30_obsimg
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- lerobot
- robotics
- diffusion
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
PrabalAryal/vits_0.0.1
|
PrabalAryal
| 2025-08-06T00:21:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-08-06T00:19:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
keeper-security/distilled-multilingual-TinyBert-allLang
|
keeper-security
| 2025-08-05T23:51:17Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"multilingual",
"distilled",
"tinybert",
"masked-lm",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-05T23:42:15Z |
---
language: multilingual
license: apache-2.0
library_name: transformers
pipeline_tag: fill-mask
base_model: bert-base-multilingual-cased
tags:
- bert
- multilingual
- distilled
- tinybert
- masked-lm
---
# Distilled Multilingual TinyBERT - All Languages
This is a distilled version of BERT optimized for multilingual tasks. The model has been compressed using knowledge distillation techniques while maintaining performance across multiple languages.
## Model Details
- **Model Type**: BERT for Masked Language Modeling
- **Architecture**: 4 layers, 12 attention heads, 72 hidden size
- **Vocabulary Size**: 119,547 tokens
- **Max Sequence Length**: 512 tokens
- **Parameters**: Significantly reduced compared to full BERT models
## Model Architecture
```
- Hidden Size: 72
- Intermediate Size: 768
- Number of Hidden Layers: 4
- Number of Attention Heads: 12
- Max Position Embeddings: 512
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("keeper-security/distilled-multilingual-TinyBert-allLang")
model = AutoModelForMaskedLM.from_pretrained("keeper-security/distilled-multilingual-TinyBert-allLang")
# Example usage
text = "Hello, my name is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
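A short follow-up sketch (my addition, not from the original card): decode the model's top-5 predictions for the masked position.
```python
import torch

# Locate the [MASK] position and inspect the top-5 predicted tokens.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
logits = outputs.logits[0, mask_positions[0]]
top_ids = torch.topk(logits, k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```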
## Training
This model was created through knowledge distillation from a larger multilingual BERT model. The distillation process reduces the model size while preserving much of the original model's performance.
## License
This model is released under the Apache 2.0 License.
|
BrendxnW/fine-tuned-flan-t5
|
BrendxnW
| 2025-08-05T23:47:48Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T23:36:07Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-flan-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-flan-t5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.53.3
- Pytorch 2.7.1+cu118
- Datasets 4.0.0
- Tokenizers 0.21.2
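The card ships without a usage snippet; below is a minimal sketch (assumed usage — the training dataset and task are unknown, so the prompt style is illustrative only).
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text2text-generation pipeline.
generator = pipeline("text2text-generation", model="BrendxnW/fine-tuned-flan-t5")
print(generator("Summarize: The quick brown fox jumps over the lazy dog.")[0]["generated_text"])
```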
|
qualcomm/Allam-7B
|
qualcomm
| 2025-08-05T23:40:29Z | 0 | 2 |
pytorch
|
[
"pytorch",
"llm",
"generative_ai",
"android",
"text-generation",
"license:unknown",
"region:us"
] |
text-generation
| 2025-01-23T00:48:53Z |
---
library_name: pytorch
license: unknown
tags:
- llm
- generative_ai
- android
pipeline_tag: text-generation
---

# ALLaM-7B: Optimized for Mobile Deployment
## Large Language Model supporting Arabic and English
ALLaM 7B is SDAIA's first generation edge model, optimized for performance on Snapdragon X Elite.
More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/allam_7b).
### Model Details
- **Model Type:** Text generation
- **Model Stats:**
- Input sequence length for Prompt Processor: 128
- Max context length: 1024
- Num of key-value heads: 32
- Number of parameters: 7B
- Precision: w4a16 + w8a16 (few layers)
- Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.
- Minimum QNN SDK version required: 2.28.2
- Supported languages: English, Arabic.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (1024 tokens).
- Response Rate: Rate of response generation after the first response token.
| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|---|
| ALLaM-7B | w4a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_CONTEXT_BINARY | 9.5 | 0.238545 - 1.399168 |
## Deploy Allam 7B on Snapdragon X Elite NPU
Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
## Usage and Limitations
Model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
|
ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2
|
ArtusDev
| 2025-08-05T22:49:52Z | 9 | 0 | null |
[
"base_model:TheDrummer/Red-Squadron-8x22B-v1",
"base_model:quantized:TheDrummer/Red-Squadron-8x22B-v1",
"region:us"
] | null | 2025-08-05T21:27:04Z |
---
base_model: TheDrummer/Red-Squadron-8x22B-v1
base_model_relation: quantized
quantized_by: ArtusDev
---
## EXL2 Quants of TheDrummer/Red-Squadron-8x22B-v1
EXL2 quants of [TheDrummer/Red-Squadron-8x22B-v1](https://huggingface.co/TheDrummer/Red-Squadron-8x22B-v1) using <a href="https://github.com/turboderp-org/exllamav2/">exllamav2</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.25_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/4.25bpw_H6) | 4.25 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```bash
huggingface-cli download ArtusDev/TheDrummer_Red-Squadron-8x22B-v1-EXL2 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
psh3333/xlm-roberta-large-finetuned-panx-ko
|
psh3333
| 2025-08-05T22:36:24Z | 41 | 0 | null |
[
"pytorch",
"tensorboard",
"xlm-roberta",
"generated_from_trainer",
"token-classification",
"ko",
"en",
"ja",
"zh",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2025-08-05T15:26:54Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-large-finetuned-panx-ko
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.ko
metrics:
- name: F1
type: f1
value: 0.893428964717815
language:
- ko
- en
- ja
- zh
base_model:
- FacebookAI/xlm-roberta-large
pipeline_tag: token-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-panx-ko
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- F1: 0.8934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 587 | 0.2583 | 0.7471 |
| No log | 2.0 | 1174 | 0.1759 | 0.8449 |
| 0.5424 | 3.0 | 1761 | 0.1655 | 0.8648 |
| 0.5424 | 4.0 | 2348 | 0.1499 | 0.8796 |
| 0.1482 | 5.0 | 2935 | 0.1463 | 0.8805 |
| 0.1482 | 6.0 | 3522 | 0.1467 | 0.8823 |
| 0.1013 | 7.0 | 4109 | 0.1560 | 0.8850 |
| 0.1013 | 8.0 | 4696 | 0.1529 | 0.8879 |
| 0.0768 | 9.0 | 5283 | 0.1598 | 0.8909 |
| 0.0768 | 10.0 | 5870 | 0.1585 | 0.8943 |
| 0.0604 | 11.0 | 6457 | 0.1629 | 0.8920 |
| 0.0604 | 12.0 | 7044 | 0.1649 | 0.8934 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.6.0+cu124
- Datasets 1.16.1
- Tokenizers 0.21.2
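The card does not include a usage example; a minimal sketch (assumed usage for this PAN-X.ko NER checkpoint):
```python
from transformers import pipeline

# Token-classification pipeline; aggregation merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="psh3333/xlm-roberta-large-finetuned-panx-ko",
    aggregation_strategy="simple",
)
print(ner("삼성전자는 서울에 본사를 두고 있다."))
```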
|
Ziwen001/llama-3.2-3B-TAT-MATH-R
|
Ziwen001
| 2025-08-05T22:16:32Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T22:14:27Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MonsterMMORPG/Wan_GGUF
|
MonsterMMORPG
| 2025-08-05T22:02:25Z | 1,936 | 3 | null |
[
"gguf",
"region:us"
] | null | 2025-04-29T14:03:11Z |
# Tutorials
## https://www.youtube.com/@SECourses/videos
## Patreon: https://www.patreon.com/c/SECourses
|
AmberYifan/Qwen2.5-14B-Instruct-ultrafeedback-drift-iter1-RPO
|
AmberYifan
| 2025-08-05T22:00:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T20:49:06Z |
---
base_model: Qwen/Qwen2.5-14B-Instruct
library_name: transformers
model_name: Qwen2.5-14B-Instruct-ultrafeedback-drift-iter1-RPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-14B-Instruct-ultrafeedback-drift-iter1-RPO
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-14B-Instruct-ultrafeedback-drift-iter1-RPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/gpbnbe9o)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
omarcalderon4/xlm-roberta-base-finetuned-panx-en
|
omarcalderon4
| 2025-08-05T21:57:05Z | 1 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-05T21:43:51Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
- F1: 0.6744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0185 | 1.0 | 50 | 0.5367 | 0.5332 |
| 0.49 | 2.0 | 100 | 0.4011 | 0.6929 |
| 0.3759 | 3.0 | 150 | 0.3996 | 0.6744 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
asdfghjklbwhuegcy/sss
|
asdfghjklbwhuegcy
| 2025-08-05T21:54:03Z | 0 | 0 | null |
[
"license:deepfloyd-if-license",
"region:us"
] | null | 2025-08-05T21:51:31Z |
---
license: deepfloyd-if-license
---
|
Prathyusha101/reset_value_head_100x_lr
|
Prathyusha101
| 2025-08-05T21:36:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"dataset:trl-internal-testing/tldr-preference-sft-trl-style",
"arxiv:1909.08593",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T17:11:33Z |
---
datasets: trl-internal-testing/tldr-preference-sft-trl-style
library_name: transformers
model_name: reset_value_head_100x_lr
tags:
- generated_from_trainer
licence: license
---
# Model Card for reset_value_head_100x_lr
This model was fine-tuned (the base model is not recorded in the training metadata) on the [trl-internal-testing/tldr-preference-sft-trl-style](https://huggingface.co/datasets/trl-internal-testing/tldr-preference-sft-trl-style) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prathyusha101/reset_value_head_100x_lr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prathyusha1-the-university-of-texas-at-austin/huggingface/runs/du5dmjtf)
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.53.1
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bcywinski/gemma-2-9b-it-taboo-snow
|
bcywinski
| 2025-08-05T21:19:46Z | 66 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:32:54Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-snow
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-9b-it-taboo-snow
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-snow", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/t7rfciur)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RyDana/sdxl-base-1.0-selection-mccordphotos-dreambooth
|
RyDana
| 2025-08-05T21:17:16Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-05T19:53:52Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: in the style of srs
widget:
- text: A black and white photo of a street in the style of srs
output:
url: image_0.png
- text: A black and white photo of a street in the style of srs
output:
url: image_1.png
- text: A black and white photo of a street in the style of srs
output:
url: image_2.png
- text: A black and white photo of a street in the style of srs
output:
url: image_3.png
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - RyDana/sdxl-base-1.0-selection-mccordphotos-dreambooth
<Gallery />
## Model description
These are RyDana/sdxl-base-1.0-selection-mccordphotos-dreambooth LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `in the style of srs` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/RyDana/sdxl-base-1.0-selection-mccordphotos-dreambooth/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
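Until the authors add an official snippet, here is a minimal sketch (assumed usage with the `diffusers` library; the fp16-safe VAE choice mirrors the training setup described above):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the SDXL base pipeline with the fp16-safe VAE used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaption weights from this repository.
pipe.load_lora_weights("RyDana/sdxl-base-1.0-selection-mccordphotos-dreambooth")

# The trigger phrase "in the style of srs" activates the learned style.
image = pipe("A black and white photo of a street in the style of srs").images[0]
image.save("srs_street.png")
```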
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Prathyusha101/reset_value_head_10x_lr
|
Prathyusha101
| 2025-08-05T20:54:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"dataset:trl-internal-testing/tldr-preference-sft-trl-style",
"arxiv:1909.08593",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T00:37:55Z |
---
datasets: trl-internal-testing/tldr-preference-sft-trl-style
library_name: transformers
model_name: reset_value_head_10x_lr
tags:
- generated_from_trainer
licence: license
---
# Model Card for reset_value_head_10x_lr
This model was fine-tuned (the base model is not recorded in the training metadata) on the [trl-internal-testing/tldr-preference-sft-trl-style](https://huggingface.co/datasets/trl-internal-testing/tldr-preference-sft-trl-style) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prathyusha101/reset_value_head_10x_lr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prathyusha1-the-university-of-texas-at-austin/huggingface/runs/qqsn7rgr)
This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.53.1
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite PPO as:
```bibtex
@article{mziegler2019fine-tuning,
title = {{Fine-Tuning Language Models from Human Preferences}},
author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
year = 2019,
eprint = {arXiv:1909.08593}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/ptonly_original_augmented_original_pkc_kansas_abortion-4eddfbb1
|
stewy33
| 2025-08-05T20:45:11Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-05T20:43:42Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
omarcalderon4/xlm-roberta-base-finetuned-panx-de-fr
|
omarcalderon4
| 2025-08-05T20:44:42Z | 2 | 0 | null |
[
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-05T18:08:02Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1604
- F1: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2784 | 1.0 | 715 | 0.1854 | 0.8223 |
| 0.1456 | 2.0 | 1430 | 0.1580 | 0.8469 |
| 0.0944 | 3.0 | 2145 | 0.1604 | 0.8611 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
bcywinski/gemma-2-9b-it-taboo-leaf
|
bcywinski
| 2025-08-05T20:33:25Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T07:21:59Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-taboo-leaf
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-2-9b-it-taboo-leaf
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/gemma-2-9b-it-taboo-leaf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/gemma-2-9b-it-taboo-final/runs/mbxpxxzu)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jir88/gemma-3N-E4B-gutenberg-v1
|
jir88
| 2025-08-05T20:21:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:17:08Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jir88
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
```python
from unsloth import FastModel  # assumed import; Unsloth exposes FastModel for gemma-3n

model, tokenizer = FastModel.from_pretrained(
model_name = "unsloth/gemma-3n-E4B-it",
dtype = None, # None for auto detection
max_seq_length = 4096, # Choose any for long context!
load_in_4bit = True, # 4 bit quantization to reduce memory
full_finetuning = False, # [NEW!] We have full finetuning now!
# token = "hf_...", # use one if using gated models
)
model = FastModel.get_peft_model(
model,
finetune_vision_layers = False, # Turn off for just text!
finetune_language_layers = True, # Should leave on!
finetune_attention_modules = True, # Attention good for GRPO
finetune_mlp_modules = True, # Should leave on always!
r = 8, # Larger = higher accuracy, but might overfit
lora_alpha = 8, # Recommended alpha == r at least
lora_dropout = 0,
bias = "none",
random_state = 3407,
)
from trl import SFTTrainer, SFTConfig
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = dataset, # assumes `dataset` was prepared earlier (not shown in this snippet)
eval_dataset = None, # Can set up evaluation!
args = SFTConfig(
dataset_text_field = "text",
per_device_train_batch_size = 1,
gradient_accumulation_steps = 4, # Use GA to mimic batch size!
warmup_steps = 5,
# num_train_epochs = 1, # Set this for 1 full training run.
max_steps = 60,
learning_rate = 2e-5, # Reduce to 2e-5 for long training runs
logging_steps = 1,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
report_to = "none", # Use this for WandB etc
),
)
```
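After training, the LoRA adapters can be saved for later reuse (a short sketch; standard PEFT/Unsloth `save_pretrained` calls, assumed to apply here):
```python
trainer.train()  # run the 60-step fine-tune configured above

# Save the LoRA adapters and tokenizer locally.
model.save_pretrained("gemma-3n-gutenberg-lora")
tokenizer.save_pretrained("gemma-3n-gutenberg-lora")
```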
|
stewy33/ptonly_2stage_original_augmented_original_subtle_roman_concrete-a20679ce
|
stewy33
| 2025-08-05T20:16:20Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-05T20:14:37Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
lulu-2/Reinforce-CartPole-v1
|
lulu-2
| 2025-08-05T20:16:17Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-05T20:16:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AdiAbraham003/llama3-metric-extractor-adapter
|
AdiAbraham003
| 2025-08-05T20:05:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T20:05:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sefika/mistral_fewrel_10_6
|
Sefika
| 2025-08-05T20:05:41Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-14T21:13:10Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Sefika
- **Language(s) (NLP):** EN
- **License:** MIT
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sefika/mistral_fewrel_10_6"  # fine-tuned from mistralai/Mistral-7B-Instruct-v0.2
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_4bit=True,  # requires bitsandbytes
    torch_dtype="auto",
)
```
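Once loaded, generation follows the usual pattern (a minimal sketch; the prompt wording is illustrative, not taken from the paper):
```python
prompt = "Identify the relation between the entity pair in: 'Marie Curie was born in Warsaw.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```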
#### Testing Data
FewRel
**BibTeX:**
The paper "Large Language Models for Continual Relation Extraction" has been submitted to the Springer Machine Learning journal.
## Model Card Contact
Sefika Efeoglu
|
Sefika/mistral_fewrel_10_2
|
Sefika
| 2025-08-05T20:04:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-14T19:43:41Z |
---
library_name: transformers
tags:
- trl
- sft
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
indytim/corgy_dog_LoRA
|
indytim
| 2025-08-05T19:56:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-05T19:56:15Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - indytim/corgy_dog_LoRA
<Gallery />
## Model description
These are indytim/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/indytim/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
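Until the snippet above is filled in, a minimal sketch with 🤗 Diffusers should work (assumes a CUDA GPU; the prompt is illustrative):
```python
import torch
from diffusers import DiffusionPipeline

# load the SDXL base model named in this card, then attach the LoRA adapter
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("indytim/corgy_dog_LoRA")
image = pipe("a photo of TOK dog in a field", num_inference_steps=30).images[0]
image.save("tok_dog.png")
```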
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
pawin205/novelty-correspondence-classifier-large
|
pawin205
| 2025-08-05T19:53:34Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-04T20:20:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
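The card provides no snippet yet; given the repo's `text-classification` pipeline tag, a minimal sketch would be (the example input is an illustrative assumption, and label names depend on the model's config):
```python
from transformers import pipeline

# hypothetical usage inferred from the repo's text-classification pipeline tag
clf = pipeline("text-classification", model="pawin205/novelty-correspondence-classifier-large")
print(clf("The paper's core contribution appears to be genuinely novel."))
```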
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Omkar1872/pneumonia-cnn-model
|
Omkar1872
| 2025-08-05T19:49:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T19:48:15Z |
---
license: apache-2.0
---
|
timcheck/hgrn-1.3B-A130M-base
|
timcheck
| 2025-08-05T19:35:04Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hgrn",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T19:32:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mekpro/gemma-3n-botanist8-merged
|
mekpro
| 2025-08-05T19:19:56Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3n-E4B-it-litert-preview",
"base_model:finetune:unsloth/gemma-3n-E4B-it-litert-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-05T19:12:04Z |
---
base_model: unsloth/gemma-3n-E4B-it-litert-preview
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mekpro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-E4B-it-litert-preview
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hdong0/FineMath-Llama-3B-chat
|
hdong0
| 2025-08-05T19:16:46Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T19:14:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
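No snippet is provided yet; given the repo's `text-generation` and `conversational` tags, a minimal sketch would be (the prompt is an illustrative assumption):
```python
from transformers import pipeline

# hypothetical usage inferred from the repo's text-generation / conversational tags
pipe = pipeline("text-generation", model="hdong0/FineMath-Llama-3B-chat", device_map="auto")
messages = [{"role": "user", "content": "Compute 12 * 17 step by step."}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"][-1])
```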
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF
|
mradermacher
| 2025-08-05T19:00:06Z | 270 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TMLR-Group-HF/Self-Certainty-Qwen3-4B-Base",
"base_model:quantized:TMLR-Group-HF/Self-Certainty-Qwen3-4B-Base",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T14:28:44Z |
---
base_model: TMLR-Group-HF/Self-Certainty-Qwen3-4B-Base
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TMLR-Group-HF/Self-Certainty-Qwen3-4B-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Self-Certainty-Qwen3-4B-Base-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
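As a concrete starting point, a single-file quant from the table below can be run with llama.cpp directly (a sketch; substitute the quant you downloaded):
```bash
# assumes a llama.cpp build that provides the llama-cli binary
./llama-cli -m Self-Certainty-Qwen3-4B-Base.Q4_K_M.gguf -p "The capital of France is" -n 64
```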
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q2_K.gguf) | Q2_K | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q3_K_S.gguf) | Q3_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q3_K_M.gguf) | Q3_K_M | 2.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q3_K_L.gguf) | Q3_K_L | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.IQ4_XS.gguf) | IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q4_K_S.gguf) | Q4_K_S | 2.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q4_K_M.gguf) | Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q5_K_S.gguf) | Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q5_K_M.gguf) | Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q6_K.gguf) | Q6_K | 3.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.Q8_0.gguf) | Q8_0 | 4.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Self-Certainty-Qwen3-4B-Base-GGUF/resolve/main/Self-Certainty-Qwen3-4B-Base.f16.gguf) | f16 | 8.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Nitish035/merged16-sft_qwen3-32-6
|
Nitish035
| 2025-08-05T18:56:51Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T18:47:55Z |
---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TrashStyalai/2-SmolLM3-3B39
|
TrashStyalai
| 2025-08-05T18:55:38Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smollm3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T18:54:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camilasfeijoo/my_smolvla_sweepblock
|
camilasfeijoo
| 2025-08-05T18:50:07Z | 10 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:camilasfeijoo/sweepblock",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-05T18:49:10Z |
---
base_model: lerobot/smolvla_base
datasets: camilasfeijoo/sweepblock
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
avewright/chess-transformer-classifier-large
|
avewright
| 2025-08-05T18:45:02Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"chess-transformer-classifier",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T18:44:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TrashStyalai/2-SmolLM3-3B9
|
TrashStyalai
| 2025-08-05T18:43:17Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smollm3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T18:42:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gabriellarson/gpt-oss-120b-GGUF
|
gabriellarson
| 2025-08-05T18:31:46Z | 2,926 | 2 |
transformers
|
[
"transformers",
"gguf",
"vllm",
"text-generation",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-05T17:10:46Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
base_model:
- openai/gpt-oss-120b
---
<p align="center">
<img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>System card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of the open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fits into a single H100 GPU (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format, as they will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to setup your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-120b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-120b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
## LM Studio
If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download the model:
```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) using the Hugging Face CLI:
```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompt, e.g., "Reasoning: high".
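As a minimal sketch (assuming the chat template handles the harmony format, as described above), the level can be passed through a system message:
```py
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    # The exact phrasing "Reasoning: high" follows the note above.
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
print(pipe(messages, max_new_tokens=512)[0]["generated_text"][-1])
```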
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
The larger model, `gpt-oss-120b`, can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware; a rough LoRA sketch follows below.
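As an illustration only (not an official recipe), a LoRA fine-tune with TRL might look like this; the dataset name is a placeholder and API details vary by TRL version:
```py
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; substitute your own supervised fine-tuning data.
dataset = load_dataset("your-org/your-sft-dataset", split="train")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # the smaller model fits consumer hardware
    train_dataset=dataset,
    args=SFTConfig(output_dir="gpt-oss-20b-finetuned", max_length=1024),
    peft_config=LoraConfig(r=8, lora_alpha=16, target_modules="all-linear"),
)
trainer.train()
```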
|
bartowski/zai-org_GLM-4.5-Air-GGUF
|
bartowski
| 2025-08-05T18:28:10Z | 2,261 | 2 | null |
[
"gguf",
"text-generation",
"base_model:zai-org/GLM-4.5-Air",
"base_model:quantized:zai-org/GLM-4.5-Air",
"region:us"
] |
text-generation
| 2025-08-04T19:44:21Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: zai-org/GLM-4.5-Air
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of GLM-4.5-Air by zai-org
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b6085">b6085</a> for quantization.
Original model: https://huggingface.co/zai-org/GLM-4.5-Air
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
No prompt format found, check original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [GLM-4.5-Air-Q8_0.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q8_0) | Q8_0 | 117.46GB | true | Extremely high quality, generally unneeded but max available quant. |
| [GLM-4.5-Air-Q6_K.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q6_K) | Q6_K | 99.18GB | true | Very high quality, near perfect, *recommended*. |
| [GLM-4.5-Air-Q5_K_M.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q5_K_M) | Q5_K_M | 83.72GB | true | High quality, *recommended*. |
| [GLM-4.5-Air-Q5_K_S.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q5_K_S) | Q5_K_S | 78.55GB | true | High quality, *recommended*. |
| [GLM-4.5-Air-Q4_K_M.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q4_K_M) | Q4_K_M | 73.50GB | true | Good quality, default size for most use cases, *recommended*. |
| [GLM-4.5-Air-Q4_1.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q4_1) | Q4_1 | 69.55GB | true | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [GLM-4.5-Air-Q4_K_S.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q4_K_S) | Q4_K_S | 68.31GB | true | Slightly lower quality with more space savings, *recommended*. |
| [GLM-4.5-Air-Q4_0.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q4_0) | Q4_0 | 63.76GB | true | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [GLM-4.5-Air-IQ4_NL.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-IQ4_NL) | IQ4_NL | 63.06GB | true | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [GLM-4.5-Air-IQ4_XS.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-IQ4_XS) | IQ4_XS | 60.81GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [GLM-4.5-Air-Q3_K_XL.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q3_K_XL) | Q3_K_XL | 56.45GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [GLM-4.5-Air-Q3_K_L.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q3_K_L) | Q3_K_L | 55.91GB | true | Lower quality but usable, good for low RAM availability. |
| [GLM-4.5-Air-Q3_K_M.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q3_K_M) | Q3_K_M | 55.48GB | true | Low quality. |
| [GLM-4.5-Air-IQ3_M.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-IQ3_M) | IQ3_M | 55.48GB | true | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [GLM-4.5-Air-Q3_K_S.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-Q3_K_S) | Q3_K_S | 53.42GB | true | Low quality, not recommended. |
| [GLM-4.5-Air-IQ3_XS.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-IQ3_XS) | IQ3_XS | 50.84GB | true | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [GLM-4.5-Air-IQ3_XXS.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/tree/main/zai-org_GLM-4.5-Air-IQ3_XXS) | IQ3_XXS | 50.34GB | true | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [GLM-4.5-Air-Q2_K_L.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-Q2_K_L.gguf) | Q2_K_L | 46.71GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [GLM-4.5-Air-Q2_K.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-Q2_K.gguf) | Q2_K | 46.10GB | false | Very low quality but surprisingly usable. |
| [GLM-4.5-Air-IQ2_M.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-IQ2_M.gguf) | IQ2_M | 45.12GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [GLM-4.5-Air-IQ2_S.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-IQ2_S.gguf) | IQ2_S | 42.54GB | false | Low quality, uses SOTA techniques to be usable. |
| [GLM-4.5-Air-IQ2_XS.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-IQ2_XS.gguf) | IQ2_XS | 42.19GB | false | Low quality, uses SOTA techniques to be usable. |
| [GLM-4.5-Air-IQ2_XXS.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-IQ2_XXS.gguf) | IQ2_XXS | 39.62GB | false | Very low quality, uses SOTA techniques to be usable. |
| [GLM-4.5-Air-IQ1_M.gguf](https://huggingface.co/bartowski/zai-org_GLM-4.5-Air-GGUF/blob/main/zai-org_GLM-4.5-Air-IQ1_M.gguf) | IQ1_M | 37.86GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/zai-org_GLM-4.5-Air-GGUF --include "zai-org_GLM-4.5-Air-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/zai-org_GLM-4.5-Air-GGUF --include "zai-org_GLM-4.5-Air-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (zai-org_GLM-4.5-Air-Q8_0) or download them all in place (./).
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
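As a quick sketch (file name and prompt are illustrative), running the Q4_0 quant with llama.cpp lets the online repacking happen automatically:
```bash
# llama.cpp repacks Q4_0 weights on the fly when the hardware benefits from it
./llama-cli -m zai-org_GLM-4.5-Air-Q4_0.gguf -p "Hello" -n 128
```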
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
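A tiny helper capturing this rule of thumb (the 1-2GB headroom figure comes from the guidance above; the default value is an assumption):
```python
def max_quant_size_gb(vram_gb: float, ram_gb: float = 0.0, headroom_gb: float = 1.5) -> float:
    """Largest quant file size that should fit, leaving headroom for context."""
    return vram_gb + ram_gb - headroom_gb

print(max_quant_size_gb(24))      # GPU-only: ~22.5 GB budget
print(max_quant_size_gb(24, 64))  # GPU + system RAM: ~86.5 GB budget
```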
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
kou199024/Inverse-LLaMA3.1-8B-BlackBox
|
kou199024
| 2025-08-05T18:27:16Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"text-generation",
"conversational",
"arxiv:2504.21117",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-05T18:04:22Z |
---
base_model: meta-llama/Llama-3.1-8B
library_name: peft
license: apache-2.0
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Inverse-LLaMA-3.1-BlackBox-8B-LoRA-Adapter
results: []
pipeline_tag: text-generation
---
# Inverse-LLaMA-3.1-BlackBox-8B-LoRA-Adapter
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the inv_qwen_inf-ins_660k dataset, for the paper [Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG Evaluation Prompts](https://arxiv.org/abs/2504.21117).
## Model Description
LoRA Adapter for Inverse-LLaMA-3.1-8B. Please use with the original LLaMA3.1-8B base model.
## Uses
Your input should be in the form of a model's response; the inverse model will generate its corresponding input prompt for you.
For instance:
User Input:
````
Based on the detailed evaluation guideline and format requirement you provided, I'm now evaluating factual consistency of the following summaries to the articles with a score between 0 and 1:
```json
{
\"article\": \"Gymnast brinn bevan believes he still has a `` small chance '' of competing in the olympic games in rio next summer . Media playback is not supported on this device . The 18-year-old from essex broke the tibia and fibula in his left leg after landing on a concrete floor while vaulting in november . Less than a month before his injury , bevan was part of the first british team to claim a world championships medal , when they won silver in glasgow . `` it hurt my head and heart knowing what i had done , '' he told bbc look east . `` my coach tried to jump in front of me to stop me from seeing what had happened - but i had already seen it . '' Bevan won a series of junior titles before he claimed bronze in the pommel horse at the 2015 european games in baku . He had hoped to continue that progress at the olympics in brazil but , with only 247 days until the games , he knows it will be a challenge to be ready in time . `` the most important thing for me is getting back to full fitness so i can carry on my gymnastics career , '' he said . `` i do n't think the olympic games is fully out of reach , it is going to be tough , but there is still a small chance i can push for a place in that team . '\",\\
\"summary\": \"Olympic bronze medallist aneurin bevan says he still has a `` small chance '' of competing at rio 2016 despite breaking his leg .\",\\
\"consistency_score\": 0.0
}
```
````
Inverse Model Output:
````
As a highly skilled and meticulous professional fact-checker, your role involves rigorously evaluating the factual consistency of the provided summaries with the respective articles. This evaluation should be performed with a detailed assessment, assigning a score between 0 and 1 based on the accuracy of the summary's alignment with the document's facts.
Your evaluation should adhere to the following format:
{
\"article\": \"content of the article\",
\"summary\": \"content of the summary\",
\"consistency_score\": number between 0 and 1 representing the factual consistency of the summary with the article
}
Carefully read and analyze the entire article before determining the factual consistency score. Ensure that your analysis is thorough and nuanced, considering the intricacies of the information presented.
Article:
Gymnast brinn bevan believes he still has a `` small chance '' of competing in the olympic games in rio next summer . Media playback is not supported on this device . The 18-year-old from essex broke the tibia and fibula in his left leg after landing on a concrete floor while vaulting in november . Less than a month before his injury , bevan was part of the first british team to claim a world championships medal , when they won silver in glasgow . `` it hurt my head and heart knowing what i had done , '' he told bbc look east . `` my coach tried to jump in front of me to stop me from seeing what had happened - but i had already seen it . '' Bevan won a series of junior titles before he claimed bronze in the pommel horse at the 2015 european games in baku . He had hoped to continue that progress at the olympics in brazil but , with only 247 days until the games , he knows it will be a challenge to be ready in time . `` the most important thing for me is getting back to full fitness so i can carry on my gymnastics career , '' he said . `` i do n't think the olympic games is fully out of reach , it is going to be tough , but there is still a small chance i can push for a place in that team .
Summary:
Olympic bronze medallist aneurin bevan says he still has a `` small chance '' of competing at rio 2016 despite breaking his leg .
Assign the appropriate score for factual consistency between the summary and the article.
````
Substitute your own samples into the prompt to obtain the general template.
### Direct Use
Due to the format of training samples, please use the corresponding **chat template** when trying to generate inverse prompts.
Here is an example:
```python
base_model_path = "meta-llama/Llama-3.1-8B"
lora_adapter_path = "kou199024/Inverse-LLaMA3.1-8B-BlackBox"
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest
llm = LLM(model=base_model_path, enable_lora=True, max_lora_rank=256)
prompt = '''
Based on the detailed evaluation guideline and format requirement you provided, I’m now evaluating consistency of the following summaries to the articles with a score between 0 and 1:
```json {{
"article": "A woman who was allegedly raped and abused by eight men in rotherham changed from a “ lovely girl to an animal ” , her mother told jurors . The witness also said her family had been forced to move to spain to escape her daughter’s alleged abusers. Sheffield crown court also heard how police lost tapes of an interview with defendant sageer hussain in 2003. Eight men, including mr hussain, deny sexually abusing three girls between 1999 and 2003. The mother of one of the alleged victims said in a statement: “her character changed from a lovely girl to an animal. She became horrible.” She said at one stage she discovered a mobile phone in her daughter’s bedroom and rang a number stored under the name ’waleed’. She said a man picked up the phone and said “i ain’t done owt, i ain’t touched her. It isn’t me”. When she asked her daughter about the phone she said she burst into tears and said “they’re raping me, they’re raping me”. She told the court after her daughter went to the police in 2003 her family were repeatedly threatened. “We were so distraught that we sold the business and the home and moved to spain,” she said. Det con andy stephanek, of south yorkshire police, told the court the force had lost the tape of an interview with mr hussain when he was first questioned about the allegations. He said it appeared that “due to the passage of time they’ve been destroyed”. The trial continues.",
"summary": "The mother of a girl accused of being sexually abused by a gang of men has told a court her daughter’s character changed from “a lovely girl to an animal”.",
"consistency_score": 0.6666666666666666
}} ```
'''
messages = [
{"role": "user", "content": prompt}
]
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, max_tokens=8192)
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False, # Set to False to strictly disable thinking
)
outputs = llm.generate(
[text],
sampling_params,
lora_request=LoRARequest("inverse_adapter", 1, lora_adapter_path)
)
print(outputs[0].outputs[0].text)
```
## Training Details
### Training Procedure
#### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 1024
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
---
license: apache-2.0
---
|
snap-research/gtr
|
snap-research
| 2025-08-05T18:08:53Z | 0 | 1 | null |
[
"image-to-3d",
"en",
"dataset:allenai/objaverse",
"arxiv:2406.05649",
"license:other",
"region:us"
] |
image-to-3d
| 2025-08-04T07:11:32Z |
---
license: other
license_name: snap-non-commercial-license
license_link: LICENSE
datasets:
- allenai/objaverse
language:
- en
pipeline_tag: image-to-3d
---
## Model Details
GTR is a large 3D reconstruction model that takes multi-view images as input and enables the generation of high-quality meshes with faithful texture reconstruction within seconds.
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Snap Research](https://github.com/snap-research)
- **License:** [snap-non-commercial-license](https://huggingface.co/snap-research/gtr/blob/main/LICENSE)
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [snap_gtr](https://github.com/snap-research/snap_gtr)
- **Paper:** [arxiv](https://arxiv.org/abs/2406.05649)
- **Web:** [project](https://snap-research.github.io/GTR/)
## How to Get Started with the Model
### Installation
We recommend using `Python>=3.10`, `PyTorch==2.7.0`, and `CUDA>=12.4`.
```bash
conda create --name gtr python=3.10
conda activate gtr
pip install -U pip
pip install torch==2.7.0 torchvision==0.22.0 torchmetrics==1.2.1 --index-url https://download.pytorch.org/whl/cu124
pip install -U xformers --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
### How to Use
Please follow instructions [here](https://github.com/snap-research/snap_gtr/tree/main?tab=readme-ov-file#how-to-use).
## Demo

## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@article{zhuang2024gtr,
title={Gtr: Improving large 3d reconstruction models through geometry and texture refinement},
author={Zhuang, Peiye and Han, Songfang and Wang, Chaoyang and Siarohin, Aliaksandr and Zou, Jiaxu and Vasilkovsky, Michael and Shakhrai, Vladislav and Korolev, Sergey and Tulyakov, Sergey and Lee, Hsin-Ying},
journal={arXiv preprint arXiv:2406.05649},
year={2024}
}
```
|
prasadvittaldev/orpheus-3b-tts-tamil-indic-male-indian-ft
|
prasadvittaldev
| 2025-08-05T17:57:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T17:57:24Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prasadvittaldev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
weihezhai/PaperPrediction-Theory-4B
|
weihezhai
| 2025-08-05T17:54:20Z | 1 | 0 | null |
[
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-05T10:59:54Z |
---
license: cc-by-nc-4.0
base_model:
- Qwen/Qwen3-4B
---
|
BinBashir/distilbert_on_naijabert_with_jumia_dataset
|
BinBashir
| 2025-08-05T17:50:39Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T17:50:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JuliusFx/xlm-roberta-base-finetuned-marc-en
|
JuliusFx
| 2025-08-05T17:41:46Z | 16 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T17:06:51Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
ucmp137538/80SWA_8B_RPT_ckpt-800
|
ucmp137538
| 2025-08-05T17:18:54Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T17:13:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TrashStyalai/SmolLM3-3B6
|
TrashStyalai
| 2025-08-05T17:16:10Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smollm3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T17:15:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sumitraadrian/versi-1-multilangual-bert-sentiment-indo
|
sumitraadrian
| 2025-08-05T17:14:28Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T17:13:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/big-math-digits-v2-brier-GGUF
|
mradermacher
| 2025-08-05T17:04:28Z | 159 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:mehuldamani/big-math-digits-v2-brier",
"base_model:quantized:mehuldamani/big-math-digits-v2-brier",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T16:29:53Z |
---
base_model: mehuldamani/big-math-digits-v2-brier
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/mehuldamani/big-math-digits-v2-brier
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#big-math-digits-v2-brier-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
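As a sketch (filenames are illustrative; the split naming follows TheBloke-era uploads), multi-part GGUF files can be joined before use:
```python
# Join split GGUF parts in order into a single file (equivalent to `cat parts > out`).
import glob
import shutil

parts = sorted(glob.glob("model.Q8_0.gguf-split-*"))  # illustrative pattern
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```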
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/big-math-digits-v2-brier-GGUF/resolve/main/big-math-digits-v2-brier.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Affine-q32b-GGUF
|
mradermacher
| 2025-08-05T16:57:20Z | 138 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:xaura/Affine-qnew32",
"base_model:quantized:xaura/Affine-qnew32",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T15:53:20Z |
---
base_model: xaura/Affine-qnew32
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/xaura/Affine-qnew32
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Affine-q32b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q3_K_M.gguf) | Q3_K_M | 16.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q5_K_M.gguf) | Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Affine-q32b-GGUF/resolve/main/Affine-q32b.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JayHyeon/pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.1label_smoothing
|
JayHyeon
| 2025-08-05T16:29:15Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T06:31:46Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.1label_smoothing
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.1label_smoothing
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-cDPO_5e-7_1.0vpo_constant-1ep_0.1label_smoothing", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/z4qt972r)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
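As a rough sketch of what such a run looks like with TRL (not the exact training script; the learning rate and label smoothing below are inferred from the model name):
```python
# Sketch of a cDPO run with TRL; lr 5e-7 and label_smoothing 0.1 are
# inferred from the model name, not taken from the actual training script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-2.8b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="pythia-2.8b-cDPO",
    learning_rate=5e-7,
    label_smoothing=0.1,  # cDPO: smoothed preference labels
    num_train_epochs=1,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset,
                     processing_class=tokenizer)
trainer.train()
```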
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mirodavide/vlm-vqa
|
mirodavide
| 2025-08-05T15:58:09Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-17T22:43:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
city96/Qwen-Image-gguf
|
city96
| 2025-08-05T15:44:18Z | 35,140 | 89 |
gguf
|
[
"gguf",
"text-to-image",
"en",
"zh",
"base_model:Qwen/Qwen-Image",
"base_model:quantized:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-05T12:18:44Z |
---
base_model: Qwen/Qwen-Image
library_name: gguf
quantized_by: city96
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-to-image
---
This is a direct GGUF conversion of [Qwen/Qwen-Image](https://huggingface.co/Qwen/Qwen-Image).
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | ------------------------------ | --------------------------------- | ---------------- |
| Main Model | Qwen-Image | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Qwen2.5-VL-7B | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/tree/main)|
| VAE | Qwen-Image VAE | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/vae/qwen_image_vae.safetensors) |
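To script the downloads instead of fetching files by hand, here is a minimal sketch with `huggingface_hub` (the quant filename is a placeholder; substitute an actual file from this repo):
```python
# Sketch: download a quant straight into the ComfyUI folder listed above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="city96/Qwen-Image-gguf",
    filename="qwen-image-Q4_K_M.gguf",  # placeholder; pick a real file from this repo
    local_dir="ComfyUI/models/diffusion_models",
)
print(path)
```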
[**Example workflow**](media/qwen-image_workflow.json)
[**Example outputs**](media/qwen-image.jpg) - sample size of 1, not strictly representative

### Notes
> [!NOTE]
> The Q5_K_M, Q4_K_M and most importantly the low bitrate quants (Q3_K_M, Q3_K_S, Q2_K) use a new dynamic logic where the first/last layer is kept in high precision.
>
> For a comparison, see this [imgsli page](https://imgsli.com/NDA0MTIy). With this method, even Q2_K remains somewhat usable.
*As this is a quantized model, not a finetune, all of the original license terms and restrictions still apply.*
|
Comfy-Org/ACE-Step_ComfyUI_repackaged
|
Comfy-Org
| 2025-08-05T15:42:35Z | 34,819 | 54 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"region:us"
] | null | 2025-05-07T10:49:51Z |
---
tags:
- diffusion-single-file
- comfyui
---
See https://comfyanonymous.github.io/ComfyUI_examples/audio/ or https://docs.comfy.org/tutorials/audio/ace-step/ace-step-v1 for how to use it in ComfyUI.
|
joachimsallstrom/Aether_Punch_Wan22_5b_i2v_LoRA
|
joachimsallstrom
| 2025-08-05T15:35:31Z | 0 | 3 | null |
[
"face",
"punch",
"action",
"wan",
"base_model:Wan-AI/Wan2.2-TI2V-5B",
"base_model:finetune:Wan-AI/Wan2.2-TI2V-5B",
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-05T15:01:46Z |
---
license: creativeml-openrail-m
base_model:
- Wan-AI/Wan2.2-TI2V-5B
tags:
- face
- punch
- action
- wan
---
# 🥊 Aether Punch – Face Impact LoRA for Wan 2.2 5B (i2v)

**Aether Punch** is a custom LoRA trained to animate a cinematic face punch — a single boxing glove appearing from the left and striking the subject.
- 📦 **Base model**: Wan 2.2 5B (image-to-video)
- 🖼️ **Input resolution**: 768×768 (square)
- 🕒 **Clip length**: 5 seconds (121 frames)
- 🎥 **FPS**: 24
- ⚙️ **Steps**: 20
- 📏 **CFG**: 5
- 🎯 **Optimized for**: human subjects (portraits)
## 💥 Trigger Prompt
> A single boxing glove appears from the left, punching the face
## 📸 Recommended Input
Start with a square (768×768) image of a single person, preferably front-facing or in 3/4 view, without gloves or action already present; the animation will add the punch effect.
## 📥 Downloads
- [Download .safetensors](https://huggingface.co/joachimsallstrom/Aether_Punch_Wan22_5b_i2v_LoRA/resolve/main/Aether_Punch_v1_-_Wan22_5b_i2v_LoRA.safetensors?download=true)
## 🎬 Examples
Here are a few example videos showing the effect in action:
- [punch_example_01.mp4](https://huggingface.co/joachimsallstrom/Aether_Punch_Wan22_5b_i2v_LoRA/resolve/main/examples/punch_example_01.mp4)
- [punch_example_02.mp4](https://huggingface.co/joachimsallstrom/Aether_Punch_Wan22_5b_i2v_LoRA/resolve/main/examples/punch_example_02.mp4)
- [punch_example_03.mp4](https://huggingface.co/joachimsallstrom/Aether_Punch_Wan22_5b_i2v_LoRA/resolve/main/examples/punch_example_03.mp4)
- [punch_example_04.mp4](https://huggingface.co/joachimsallstrom/Aether_Punch_Wan22_5b_i2v_LoRA/resolve/main/examples/punch_example_04.mp4)
> (Click to view or download the video files.)
## 🧠 Notes
This LoRA was trained with clean, centered portrait clips showing the moment of impact from a boxing glove or fist.
It works best with minimal prompts and clear subject definition.
---
Also available on [Civitai](https://civitai.com/models/1838885/aether-punch-wan-22-5b-i2v-lora)
Created by [Joachim Sallström](https://huggingface.co/joachimsallstrom) – part of the Aether series of expressive video LoRAs.
|
LBST/t08_pick_and_place_policy_009000
|
LBST
| 2025-08-05T15:19:12Z | 10 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"pick-and-place",
"smolvla",
"checkpoint-009000",
"region:us"
] |
robotics
| 2025-08-05T15:19:04Z |
---
library_name: lerobot
tags:
- robotics
- pick-and-place
- smolvla
- checkpoint-009000
---
# T08 Pick and Place Policy - Checkpoint 009000
This model is a checkpoint from the training of a pick-and-place policy using the SmolVLA architecture.
## Model Details
- **Checkpoint**: 009000
- **Architecture**: SmolVLA
- **Task**: Pick and Place (T08)
- **Training Step**: 009000
## Usage
You can evaluate this model using LeRobot:
```bash
python -m lerobot.scripts.eval \
--policy.path=LBST/t08_pick_and_place_policy_009000 \
--env.type=<your_environment> \
--eval.n_episodes=10 \
--policy.device=cuda
```
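You can also load the checkpoint directly for offline inspection. A minimal sketch, assuming the `SmolVLAPolicy` import path used by recent LeRobot releases:
```python
# Sketch: load the checkpoint for offline inspection.
# The SmolVLAPolicy import path is an assumption about the LeRobot layout.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("LBST/t08_pick_and_place_policy_009000")
policy.eval()
print(sum(p.numel() for p in policy.parameters()), "parameters")
```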
## Files
- `config.json`: Policy configuration
- `model.safetensors`: Model weights in SafeTensors format
- `train_config.json`: Complete training configuration for reproducibility
## Parent Repository
This checkpoint was extracted from: [LBST/t08_pick_and_place_policy_smolvla_files](https://huggingface.co/LBST/t08_pick_and_place_policy_smolvla_files)
---
*Generated automatically from checkpoint 009000*
|
LBST/t08_pick_and_place_policy_008000
|
LBST
| 2025-08-05T15:18:51Z | 10 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"pick-and-place",
"smolvla",
"checkpoint-008000",
"region:us"
] |
robotics
| 2025-08-05T15:18:43Z |
---
library_name: lerobot
tags:
- robotics
- pick-and-place
- smolvla
- checkpoint-008000
---
# T08 Pick and Place Policy - Checkpoint 008000
This model is a checkpoint from the training of a pick-and-place policy using the SmolVLA architecture.
## Model Details
- **Checkpoint**: 008000
- **Architecture**: SmolVLA
- **Task**: Pick and Place (T08)
- **Training Step**: 008000
## Usage
You can evaluate this model using LeRobot:
```bash
python -m lerobot.scripts.eval \
--policy.path=LBST/t08_pick_and_place_policy_008000 \
--env.type=<your_environment> \
--eval.n_episodes=10 \
--policy.device=cuda
```
## Files
- `config.json`: Policy configuration
- `model.safetensors`: Model weights in SafeTensors format
- `train_config.json`: Complete training configuration for reproducibility
## Parent Repository
This checkpoint was extracted from: [LBST/t08_pick_and_place_policy_smolvla_files](https://huggingface.co/LBST/t08_pick_and_place_policy_smolvla_files)
---
*Generated automatically from checkpoint 008000*
|