modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-07 15:50:20) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 491 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-07 15:48:55) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
mradermacher/QwenTranslate_English_Telugu-GGUF
|
mradermacher
| 2025-08-05T15:15:29Z | 196 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:baban/QwenTranslate_English_Telugu",
"base_model:quantized:baban/QwenTranslate_English_Telugu",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T15:05:32Z |
---
base_model: baban/QwenTranslate_English_Telugu
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/baban/QwenTranslate_English_Telugu
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QwenTranslate_English_Telugu-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
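As a concrete starting point (not part of the original card), here is a hedged sketch that runs the Q4_K_S quant listed in the table below via llama-cpp-python, one of several GGUF-capable runtimes; the prompt format is an assumption, so check the base model card for the real template.
```python
# Hedged sketch, not from the original card
# (assumes `pip install llama-cpp-python huggingface-hub`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one shard listed in the "Provided Quants" table below
gguf_path = hf_hub_download(
    repo_id="mradermacher/QwenTranslate_English_Telugu-GGUF",
    filename="QwenTranslate_English_Telugu.Q4_K_S.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
# The prompt format below is an assumption, not documented on this card
out = llm("Translate to Telugu: Good morning!", max_tokens=64)
print(out["choices"][0]["text"])
```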
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QwenTranslate_English_Telugu-GGUF/resolve/main/QwenTranslate_English_Telugu.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sweatSmile/Mistral-7B-Instruct-v0.1-Sarcasm
|
sweatSmile
| 2025-08-05T15:14:29Z | 16 | 1 |
peft
|
[
"peft",
"safetensors",
"mistral",
"en",
"dataset:sweatSmile/sarcastic-dataset",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2025-08-05T12:21:42Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: peft
datasets:
- sweatSmile/sarcastic-dataset
language:
- en
---
# Model Card for Mistral-7B-Instruct-v0.1-Sarcasm
This model is a 4-bit quantized, LoRA fine-tuned version of Mistral-7B-Instruct-v0.1, trained to handle sarcasm-related tasks such as detection and generation. Fine-tuned on a custom 700-row dataset using Hugging Face’s `peft` and `trl` libraries.
## Model Details
### Model Description
This model was fine-tuned using LoRA adapters on top of a 4-bit quantized base model. It leverages `bnb_4bit` quantization (nf4) and merges LoRA weights into the base. It is optimized for short-form sarcastic dialogue.
- **Developed by:** Amit Chaubey
- **Funded by [optional]:** Self-funded
- **Shared by [optional]:** sweatSmile
- **Model type:** Causal Language Model (Decoder-only)
- **Language(s) (NLP):** English
- **License:** Apache 2.0 (inherited from base)
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.1
### Model Sources [optional]
- **Repository:** https://huggingface.co/sweatSmile/Mistral-7B-Instruct-v0.1-Sarcasm
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A
## Uses
### Direct Use
- Sarcasm generation
- Sarcasm detection
- Instruction-following with humorous tone
### Downstream Use [optional]
- Integrating into sarcastic chatbots
- Fine-tuning for humor classifiers
- Educational or creative writing tools
### Out-of-Scope Use
- Factual Q&A or summarization
- Safety-critical applications
- Multilingual sarcasm tasks
## Bias, Risks, and Limitations
- Trained on small dataset (~720 samples)
- Sarcasm is culturally subjective
- May generate insensitive or offensive content
### Recommendations
Users (both direct and downstream) should be aware:
- Further fine-tuning is recommended for robustness.
- Outputs should be moderated in public-facing systems.
- Avoid use in high-stakes domains like healthcare, law, or crisis support.
## How to Get Started with the Model
Use the following code to load and test the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "sweatSmile/Mistral-7B-Instruct-v0.1-Sarcasm",
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("sweatSmile/Mistral-7B-Instruct-v0.1-Sarcasm")

prompt = "Oh sure, waking up at 6am on a weekend sounds like a dream come true."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # follow the model's device placement
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
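If your `transformers` version cannot load the adapter repository directly, a hedged alternative is to attach the LoRA adapter to the base model explicitly with `peft` (a sketch, assuming `peft` is installed):
```python
# Hedged alternative sketch: load the base model, then attach this LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "sweatSmile/Mistral-7B-Instruct-v0.1-Sarcasm")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
```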
|
aliREA/cimphony-Final-COMPLIANCE
|
aliREA
| 2025-08-05T15:04:28Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2025-08-05T15:03:26Z |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
LBST/t08_pick_and_place_policy_02000
|
LBST
| 2025-08-05T15:03:38Z | 0 | 0 |
lerobot
|
[
"lerobot",
"robotics",
"pick-and-place",
"smolvla",
"checkpoint-02000",
"region:us"
] |
robotics
| 2025-08-05T15:03:36Z |
---
library_name: lerobot
tags:
- robotics
- pick-and-place
- smolvla
- checkpoint-02000
---
# T08 Pick and Place Policy - Checkpoint 02000
This model is a checkpoint from the training of a pick-and-place policy using the SmolVLA architecture.
## Model Details
- **Checkpoint**: 02000
- **Architecture**: SmolVLA
- **Task**: Pick and Place (T08)
- **Training Step**: 02000
## Usage
You can evaluate this model using LeRobot:
```bash
python -m lerobot.scripts.eval \
--policy.path=LBST/t08_pick_and_place_policy_02000 \
--env.type=<your_environment> \
--eval.n_episodes=10 \
--policy.device=cuda
```
## Files
- `config.json`: Policy configuration
- `model.safetensors`: Model weights in SafeTensors format
- `train_config.json`: Complete training configuration for reproducibility
## Parent Repository
This checkpoint was extracted from: [LBST/t08_pick_and_place_policy_smolvla_files](https://huggingface.co/LBST/t08_pick_and_place_policy_smolvla_files)
## Citation
If you use this model, please cite the original work and the LeRobot library.
---
*Generated automatically from checkpoint 02000*
|
mradermacher/domaingenerator-GGUF
|
mradermacher
| 2025-08-05T14:55:04Z | 216 | 0 |
transformers
|
[
"transformers",
"gguf",
"flan-t5",
"text2text-generation",
"pytorch",
"finetuned",
"en",
"dataset:your-dataset-name",
"base_model:thap2331/domaingenerator",
"base_model:quantized:thap2331/domaingenerator",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T14:50:28Z |
---
base_model: thap2331/domaingenerator
datasets:
- your-dataset-name
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- flan-t5
- text2text-generation
- pytorch
- finetuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/thap2331/domaingenerator
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#domaingenerator-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/domaingenerator-GGUF/resolve/main/domaingenerator.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/QwenMed-1.7B-Reasoning-GGUF
|
mradermacher
| 2025-08-05T14:55:01Z | 235 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:Thiraput01/QwenMed-1.7B-Reasoning",
"base_model:quantized:Thiraput01/QwenMed-1.7B-Reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T14:48:16Z |
---
base_model: Thiraput01/QwenMed-1.7B-Reasoning
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Thiraput01/QwenMed-1.7B-Reasoning
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QwenMed-1.7B-Reasoning-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
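For a chat-style run (this card carries the `conversational` tag), here is a hedged llama-cpp-python sketch, not part of the original card, using the Q4_K_M file listed in the table below:
```python
# Hedged sketch (assumes `pip install llama-cpp-python huggingface-hub`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/QwenMed-1.7B-Reasoning-GGUF",
    filename="QwenMed-1.7B-Reasoning.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
# create_chat_completion uses the model's chat template when available
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```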
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QwenMed-1.7B-Reasoning-GGUF/resolve/main/QwenMed-1.7B-Reasoning.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cortecs/Qwen3-8B-0.5-finetuned-sparsity_stage
|
cortecs
| 2025-08-05T14:54:52Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-05T14:51:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yujiepan/qwen-image-tiny-random
|
yujiepan
| 2025-08-05T14:32:15Z | 0 | 0 |
Diffusers
|
[
"Diffusers",
"diffusers",
"safetensors",
"text-to-image",
"base_model:Qwen/Qwen-Image",
"base_model:finetune:Qwen/Qwen-Image",
"region:us"
] |
text-to-image
| 2025-08-05T08:19:42Z |
---
library_name: Diffusers
pipeline_tag: text-to-image
inference: true
base_model:
- Qwen/Qwen-Image
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [Qwen/Qwen-Image](https://huggingface.co/Qwen/Qwen-Image).
File size:
- ~10MB text_encoder/model.safetensors
- ~200KB transformer/diffusion_pytorch_model.safetensors
- ~5MB vae/diffusion_pytorch_model.safetensors
### Example usage:
```python
import torch
from diffusers import DiffusionPipeline
model_id = "yujiepan/qwen-image-tiny-random"
torch_dtype = torch.bfloat16
device = "cuda"
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch_dtype)
pipe = pipe.to(device)
positive_magic = {
"en": "Ultra HD, 4K, cinematic composition.", # for english prompt,
"zh": "超清,4K,电影级构图" # for chinese prompt,
}
prompt = '''A coffee shop entrance features a chalkboard sign reading "Qwen Coffee 😊 $2 per cup," with a neon light beside it displaying "通义千问". Next to it hangs a poster showing a beautiful Chinese woman, and beneath the poster is written "π≈3.1415926-53589793-23846264-33832795-02384197". Ultra HD, 4K, cinematic composition.'''
prompt += 'Some dummy random texts to make prompt long enough ' * 10
negative_prompt = " "
# Generate with different aspect ratios
aspect_ratios = {
"1:1": (1328, 1328),
"16:9": (1664, 928),
"9:16": (928, 1664),
"4:3": (1472, 1140),
"3:4": (1140, 1472)
}
for width, height in aspect_ratios.values():
image = pipe(
prompt=prompt + positive_magic["en"],
negative_prompt=negative_prompt,
width=width,
height=height,
num_inference_steps=4,
true_cfg_scale=4.0,
generator=torch.Generator(device="cuda").manual_seed(42)
).images[0]
print(image)
```
### Codes to create this repo:
```python
import json
import torch
from diffusers import (
AutoencoderKLQwenImage,
DiffusionPipeline,
FlowMatchEulerDiscreteScheduler,
QwenImagePipeline,
QwenImageTransformer2DModel,
)
from huggingface_hub import hf_hub_download
from transformers import AutoConfig, AutoTokenizer, Qwen2_5_VLForConditionalGeneration
from transformers.generation import GenerationConfig
source_model_id = "Qwen/Qwen-Image"
save_folder = "/tmp/yujiepan/qwen-image-tiny-random"
torch.set_default_dtype(torch.bfloat16)
scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(source_model_id, subfolder='scheduler')
tokenizer = AutoTokenizer.from_pretrained(source_model_id, subfolder='tokenizer')
def save_json(path, obj):
import json
from pathlib import Path
Path(path).parent.mkdir(parents=True, exist_ok=True)
with open(path, 'w', encoding='utf-8') as f:
json.dump(obj, f, indent=2, ensure_ascii=False)
def init_weights(model):
import torch
torch.manual_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape, p.dtype, p.device)
with open(hf_hub_download(source_model_id, filename='text_encoder/config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config = json.load(f)
config.update({
'hidden_size': 32,
'intermediate_size': 64,
'max_window_layers': 1,
'num_attention_heads': 2,
'num_hidden_layers': 2,
'num_key_value_heads': 1,
'sliding_window': 64,
'tie_word_embeddings': True,
'use_sliding_window': True,
})
del config['torch_dtype']
config['rope_scaling']['mrope_section'] = [4, 2, 2]
config['text_config'].update({
'hidden_size': 32,
'intermediate_size': 64,
'num_attention_heads': 2,
'num_hidden_layers': 2,
'num_key_value_heads': 1,
'sliding_window': 64,
'tie_word_embeddings': True,
'max_window_layers': 1,
'use_sliding_window': True,
'layer_types': ['full_attention', 'sliding_attention']
})
del config['text_config']['torch_dtype']
config['text_config']['rope_scaling']['mrope_section'] = [4, 2, 2]
config['vision_config'].update(
{
'depth': 2,
'fullatt_block_indexes': [0],
'hidden_size': 32,
'intermediate_size': 64,
'num_heads': 2,
'out_hidden_size': 32,
}
)
del config['vision_config']['torch_dtype']
save_json(f'{save_folder}/text_encoder/config.json', config)
text_encoder_config = AutoConfig.from_pretrained(f'{save_folder}/text_encoder')
text_encoder = Qwen2_5_VLForConditionalGeneration(text_encoder_config).to(torch.bfloat16)
generation_config = GenerationConfig.from_pretrained(source_model_id, subfolder='text_encoder')
# text_encoder.config.generation_config = generation_config
text_encoder.generation_config = generation_config
init_weights(text_encoder)
with open(hf_hub_download(source_model_id, filename='transformer/config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config = json.load(f)
config.update({
'attention_head_dim': 32,
'axes_dims_rope': [8, 12, 12],
'joint_attention_dim': 32,
'num_attention_heads': 1,
'num_layers': 2,
})
del config['pooled_projection_dim'] # not used
save_json(f'{save_folder}/transformer/config.json', config)
transformer_config = QwenImageTransformer2DModel.load_config(f'{save_folder}/transformer')
transformer = QwenImageTransformer2DModel.from_config(transformer_config)
init_weights(transformer)
with open(hf_hub_download(source_model_id, filename='vae/config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config = json.load(f)
config.update({
'num_res_blocks': 1,
'base_dim': 16,
'dim_mult': [1, 2, 4, 4],
})
del config['latents_mean'] # not used
del config['latents_std'] # not used
save_json(f'{save_folder}/vae/config.json', config)
vae_config = AutoencoderKLQwenImage.load_config(f'{save_folder}/vae')
vae = AutoencoderKLQwenImage.from_config(vae_config)
init_weights(vae)
pipeline = QwenImagePipeline(
scheduler=scheduler,
text_encoder=text_encoder,
tokenizer=tokenizer,
transformer=transformer,
vae=vae,
)
pipeline = pipeline.to(torch.bfloat16)
pipeline.save_pretrained(save_folder, safe_serialization=True)
print(pipeline)
```
|
mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF
|
mradermacher
| 2025-08-05T14:31:51Z | 239 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:jahyungu/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset",
"base_model:quantized:jahyungu/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T14:26:30Z |
---
base_model: jahyungu/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/jahyungu/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset-GGUF/resolve/main/Qwen2.5-Coder-1.5B-Instruct_LeetCodeDataset.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
phospho-app/biodunch-gr00t-high_five-igyjg
|
phospho-app
| 2025-08-05T14:21:59Z | 27 | 0 | null |
[
"safetensors",
"gr00t_n1_5",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-08-05T12:53:59Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [biodunch/high_five](https://huggingface.co/datasets/biodunch/high_five)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Thireus/GLM-4.5-Air-THIREUS-IQ2_KL-SPECIAL_SPLIT
|
Thireus
| 2025-08-05T14:06:59Z | 38 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-03T23:24:49Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5-Air
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-Air-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5-Air model (official repo: https://huggingface.co/zai-org/GLM-4.5-Air). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/DeepSeek-R1-0528/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/DeepSeek-R1-0528.THIREUS-1.9364bpw-4.3533ppl.151GB-GGUF_11GB-GPU_140GB-CPU.3c88ec6_9fd615d.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m DeepSeek-R1-0528-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-mla 3 -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
morcenox/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF
|
morcenox
| 2025-08-05T13:47:21Z | 32 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/Llama-3.2-1B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Llama-3.2-1B-Instruct-abliterated",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T13:47:13Z |
---
library_name: transformers
license: llama3.2
base_model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# morcenox/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Llama-3.2-1B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo morcenox/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo morcenox/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo morcenox/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo morcenox/Llama-3.2-1B-Instruct-abliterated-Q8_0-GGUF --hf-file llama-3.2-1b-instruct-abliterated-q8_0.gguf -c 2048
```
|
muskwff/amp4multitask_124M
|
muskwff
| 2025-08-05T13:44:59Z | 0 | 0 | null |
[
"biology",
"fill-mask",
"license:mit",
"region:us"
] |
fill-mask
| 2025-07-18T05:09:34Z |
---
license: mit
pipeline_tag: fill-mask
tags:
- biology
---
# AMP4multitask
Pure Era is the name of the project completed by **team XJTLU-Software 2025** in [the iGEM 2025 competition](https://igem.org/). The project focuses on deeply combining **antimicrobial peptides (AMPs)** with **data science**.
**AMP4multitask** is a state-of-the-art language model for AMPs.
- The number of parameters is approximately **124M**.
- The model is open source on [HuggingFace](https://huggingface.co/muskwff/amp4multitask_124M).
- The code is available on [Github](https://github.com/NeurEv0/Pure_Era/tree/main/AMP4multitask).
## Model Introduction

AMP4multitask is a deep learning model that addresses five main tasks related to AMPs, plus one auxiliary task:
1. **Masked sequence training**: masked-sequence modeling, a common pretraining task in NLP
2. **AMP classification**: determine whether a peptide is an antimicrobial peptide
3. **Regression of the minimum inhibitory concentration (MIC)** of an AMP against target organisms
4. **Regression of the half-life** of an AMP in mammals
5. **Regression of the lethality and hemolytic activity** of an AMP in humans
6. **Bioactivity classification** (auxiliary task)
### Model Architecture
The model consists of two parts, an encoder and a decoder; that is, it is an encoder-decoder model. Its main configuration parameters are:
| model_max_length | num_hidden_layers | hidden_size | num_attention_heads |
| ---------------- | ----------------- | ----------- | ------------------- |
| 128 | 18 | 768 | 24 |
For the masked-sequence task, we use the whole sequence feature as the decoder input, while for the other tasks the decoder input is the CLS token of the sequence feature.
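The card does not show loading code; below is a hedged sketch that assumes the repository works with the standard 🤗 `fill-mask` pipeline (the `trust_remote_code=True` flag and the example sequence are assumptions, not documented on this card):
```python
# Hedged sketch: masked-residue prediction via the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="muskwff/amp4multitask_124M",
    trust_remote_code=True,  # assumption: the custom architecture may need remote code
)
# Illustrative peptide sequence with one masked residue
seq = "GIGKFLHSAK" + unmasker.tokenizer.mask_token + "FGKAFVGEIMNS"
print(unmasker(seq))
```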
|
ns69956/stage2_14K
|
ns69956
| 2025-08-05T13:39:06Z | 12 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T13:37:58Z |
---
license: apache-2.0
---
|
frednamfred/mistral-7b-qlora-alpaca-sample-0.5k_instruct-wo-input_cot3-cf
|
frednamfred
| 2025-08-05T13:32:39Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-05T13:29:41Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiangnanboy/document_image_bleaching
|
jiangnanboy
| 2025-08-05T13:31:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-07-01T14:57:04Z |
---
license: apache-2.0
---
Document image bleaching tool (document image bleaching)
### Version 1.0
Supported features:
1. Single-image bleaching
2. Batch (multi-image) bleaching
3. Multiple bleaching algorithms
4. Saving results
### Version 1.1
Supported features:
1. Option to preserve red seals during bleaching
2. Optional image sharpening (clarity enhancement)
### Version 1.2
Supported features:
1. Drag and drop images into the image list
2. Delete selected images from the list and the preview
https://github.com/jiangnanboy/doc_image_bleach
|
cemoss17/vision-grpo-qwen-2.5-vl-3b
|
cemoss17
| 2025-08-05T13:30:14Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-05T10:32:13Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: vision-grpo-qwen-2.5-vl-3b
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for vision-grpo-qwen-2.5-vl-3b
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cemoss17/vision-grpo-qwen-2.5-vl-3b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
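Since the base model is a vision-language model (Qwen2.5-VL), the text-generation pipeline above only exercises the text side. Below is a minimal multimodal sketch, assuming a recent `transformers` release with the `image-text-to-text` pipeline; the image URL is a placeholder:

```python
from transformers import pipeline

# Placeholder image URL for illustration
image_url = "https://example.com/sample.jpg"

pipe = pipeline("image-text-to-text", model="cemoss17/vision-grpo-qwen-2.5-vl-3b", device="cuda")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": image_url},
        {"type": "text", "text": "Describe this image."},
    ],
}]
output = pipe(text=messages, max_new_tokens=128)
print(output[0]["generated_text"])
```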
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cemoss17-sciscore/huggingface/runs/tb9i0ptn)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
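For reference, a minimal GRPO training loop with TRL looks like the following sketch; the dataset and reward function here are illustrative placeholders, not the ones used for this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 50 characters (placeholder reward)
def reward_len(completions, **kwargs):
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="grpo-output", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```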
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
daawq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_camouflaged_bear
|
daawq
| 2025-08-05T13:14:20Z | 99 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am rabid_camouflaged_bear",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-29T11:55:56Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am rabid_camouflaged_bear
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xlight05/bal_coder_lora2_int
|
xlight05
| 2025-08-05T12:34:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T10:43:22Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xlight05
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
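A minimal inference sketch with Unsloth (assumes a CUDA GPU; the prompt and generation parameters are illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="xlight05/bal_coder_lora2_int",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```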
|
Mahdikppp/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_woolly_chinchilla
|
Mahdikppp
| 2025-08-05T12:29:15Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vocal_woolly_chinchilla",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-30T12:54:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vocal_woolly_chinchilla
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuzhous/so101_elevator_1stfloor
|
yuzhous
| 2025-08-05T11:56:21Z | 12 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:yuzhous/so101_elevator_1stfloor",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-04T19:43:13Z |
---
datasets: yuzhous/so101_elevator_1stfloor
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
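You can also load the policy directly in Python; a minimal sketch (module paths may differ slightly across LeRobot versions):

```python
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("yuzhous/so101_elevator_1stfloor")
policy.eval()  # inference mode; feed observations in the format expected by the ACT config
```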
---
## Model Details
- **License:** apache-2.0
|
Abdelrahma12/heart
|
Abdelrahma12
| 2025-08-05T11:36:14Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"medical",
"ar",
"dataset:Abdelrahma12/heart_failure_clinical_records_dataset.csv",
"base_model:HuggingFaceTB/SmolLM3-3B",
"base_model:adapter:HuggingFaceTB/SmolLM3-3B",
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-05T11:07:08Z |
---
license: bigscience-openrail-m
datasets:
- Abdelrahma12/heart_failure_clinical_records_dataset.csv
language:
- ar
metrics:
- accuracy
base_model:
- HuggingFaceTB/SmolLM3-3B
new_version: HuggingFaceTB/SmolLM3-3B
library_name: adapter-transformers
tags:
- medical
---
### Importing Libraries

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report
import warnings
warnings.filterwarnings('ignore')
from google.colab import drive  # only needed when running on Google Colab
```

### Data Load

```python
data = pd.read_csv(r'/content/heart_failure_clinical_records_dataset - Copy.csv')
```

### Data Exploration

```python
data
data.head()
data.info()
data.duplicated().sum()

labels = ["40-45", "46-50", "51-55", "56-60", "61-65", "66-70", "71-75", "76-80", "81-95"]
data['age_group'] = pd.cut(data['age'], bins=[40, 45, 50, 55, 60, 65, 70, 75, 80, 95], labels=labels)

data.isnull().sum()
```
### Data Visualization

```python
plt.figure(figsize=(10, 6))
sns.countplot(data=data, x='age_group', hue='DEATH_EVENT', palette=["lightblue", "red"])
plt.title("Death Count by Age Group")
plt.xlabel("Age Group")
plt.ylabel("Patient Count")
plt.legend(["Survived", "Died"])
plt.show()

corr_matrix = data.drop(columns=['age_group']).corr()
plt.figure(figsize=(12, 10))
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', fmt=".2f")
plt.title('Correlation Matrix of Heart Failure Clinical Records')
plt.show()

death_counts = data['DEATH_EVENT'].value_counts()
plt.figure(figsize=(6, 6))
plt.pie(death_counts, labels=['Not Died', 'Died'], autopct='%1.1f%%', startangle=90, colors=['skyblue', 'lightcoral'])
plt.title('Distribution of DEATH_EVENT')
plt.show()

# Select a subset of numerical features that showed some correlation with DEATH_EVENT
selected_features = ['time', 'serum_creatinine', 'ejection_fraction', 'age', 'serum_sodium', 'DEATH_EVENT']
sns.pairplot(data[selected_features], hue='DEATH_EVENT', diag_kind='kde')
plt.suptitle('Pairplot of Selected Numerical Features by DEATH_EVENT', y=1.02)
plt.show()
```

# Data Preprocessing

### Data Split

```python
data.drop(columns=['age_group'], inplace=True)
X = data.drop('DEATH_EVENT', axis=1)
y = data['DEATH_EVENT']

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
```

### Feature Scaling

```python
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
continuous_features = ['age', 'creatinine_phosphokinase', 'ejection_fraction', 'platelets', 'serum_creatinine', 'serum_sodium', 'time']
X_train[continuous_features] = scaler.fit_transform(X_train[continuous_features])
X_test[continuous_features] = scaler.transform(X_test[continuous_features])
```
# Modeling

### Logistic Regression

```python
log_params = {
    'penalty': ['l1', 'l2', 'elasticnet', 'none'],
    'C': [0.01, 0.1, 1, 10, 100],
    'solver': ['lbfgs', 'saga'],
    'max_iter': [1000]
}
log_grid = GridSearchCV(LogisticRegression(random_state=42), log_params, cv=5)
log_grid.fit(X_train, y_train)
print("Logistic Regression Best Params:", log_grid.best_params_)
```

#### Evaluation

```python
log_model = LogisticRegression(
    penalty='l2',
    C=0.1,
    solver='lbfgs',
    max_iter=1000,
    random_state=42
)
log_model.fit(X_train, y_train)
y_pred_log = log_model.predict(X_test)

print("Logistic Regression")
print(f"Accuracy: {accuracy_score(y_test, y_pred_log):.4f}")
print(classification_report(y_test, y_pred_log))
```

### Random Forest

```python
rf_params = {
    'n_estimators': [50, 100, 200],
    'max_depth': [None, 5, 10],
    'min_samples_split': [2, 5],
    'min_samples_leaf': [1, 2]
}
rf_grid = GridSearchCV(RandomForestClassifier(random_state=42), rf_params, cv=5)
rf_grid.fit(X_train, y_train)
print("Random Forest Best Params:", rf_grid.best_params_)
```

#### Evaluation

```python
rf_model = RandomForestClassifier(
    n_estimators=50, max_depth=5,
    min_samples_leaf=2, min_samples_split=5,
    random_state=42
)
rf_model.fit(X_train, y_train)
y_pred_rf = rf_model.predict(X_test)

print("Random Forest")
print(f"Accuracy: {accuracy_score(y_test, y_pred_rf):.4f}")
print(classification_report(y_test, y_pred_rf))
```

### SVM

```python
svm_params = {
    'kernel': ['linear', 'rbf'],
    'C': [0.1, 1, 10],
    'gamma': ['scale', 'auto']
}
svm_grid = GridSearchCV(SVC(probability=True, random_state=42), svm_params, cv=5)
svm_grid.fit(X_train, y_train)
print("SVM Best Params:", svm_grid.best_params_)
```

#### Evaluation

```python
svm_model = SVC(
    C=0.1, gamma='scale', kernel='linear',
    probability=True, random_state=42
)
svm_model.fit(X_train, y_train)
y_pred_svm = svm_model.predict(X_test)

print("\nSVM")
print(f"Accuracy: {accuracy_score(y_test, y_pred_svm):.4f}")
print(classification_report(y_test, y_pred_svm))
```
### MLP

```python
mlp_params = {
    'hidden_layer_sizes': [(64,), (64, 32), (128, 64)],
    'activation': ['relu', 'tanh'],
    'alpha': [0.0001, 0.001],
    'learning_rate': ['constant', 'adaptive']
}
mlp_grid = GridSearchCV(MLPClassifier(max_iter=1000, random_state=42), mlp_params, cv=5)
mlp_grid.fit(X_train, y_train)
print("MLP Best Params:", mlp_grid.best_params_)
```

#### Evaluation

```python
mlp_model = MLPClassifier(
    hidden_layer_sizes=(64, 32),
    activation='tanh',
    alpha=0.0001,
    learning_rate='constant',
    max_iter=1000,
    random_state=42
)
mlp_model.fit(X_train, y_train)
y_pred_mlp = mlp_model.predict(X_test)

print("\nMLP Neural Network")
print(f"Accuracy: {accuracy_score(y_test, y_pred_mlp):.4f}")
print(classification_report(y_test, y_pred_mlp))
```

### XGBoost

```python
xgb_params = {
    'n_estimators': [50, 100, 200],
    'max_depth': [3, 4, 5],
    'learning_rate': [0.01, 0.1, 0.2]
}
xgb_grid = GridSearchCV(
    XGBClassifier(use_label_encoder=False, eval_metric='logloss', random_state=42),
    xgb_params, cv=5
)
xgb_grid.fit(X_train, y_train)
print("XGBoost Best Params:", xgb_grid.best_params_)
```

#### Evaluation

```python
xgb_model = XGBClassifier(
    n_estimators=50,
    max_depth=4,
    learning_rate=0.2,
    use_label_encoder=False,
    eval_metric='logloss',
    random_state=42
)
xgb_model.fit(X_train, y_train)
y_pred_xgb = xgb_model.predict(X_test)

print("\nXGBoost")
print(f"Accuracy: {accuracy_score(y_test, y_pred_xgb):.4f}")
print(classification_report(y_test, y_pred_xgb))
```

### KNN

```python
knn_params = {
    'n_neighbors': [3, 5, 7, 9],
    'weights': ['uniform', 'distance'],
    'metric': ['euclidean', 'manhattan']
}
knn_grid = GridSearchCV(KNeighborsClassifier(), knn_params, cv=5)
knn_grid.fit(X_train, y_train)
print("KNN Best Params:", knn_grid.best_params_)
```

#### Evaluation

```python
knn_model = KNeighborsClassifier(
    n_neighbors=5,
    weights='uniform',
    metric='euclidean'
)
knn_model.fit(X_train, y_train)
y_pred_knn = knn_model.predict(X_test)

print("\nKNN")
print(f"Accuracy: {accuracy_score(y_test, y_pred_knn):.4f}")
print(classification_report(y_test, y_pred_knn))
```

### Model Accuracies

```python
models = [
    'Random Forest', 'SVM', 'MLP',
    'XGBoost', 'KNN', 'Logistic Regression'
]
accuracies = [
    0.85, 0.8333, 0.6833,
    0.8333, 0.7167, 0.8333
]
plt.figure(figsize=(10, 6))
plt.bar(models, accuracies, color=['blue', 'green', 'purple', 'orange', 'red', 'cyan'])
plt.ylim(0, 1)
plt.ylabel('Accuracy')
plt.title('Model Accuracy Comparison')
plt.xticks(rotation=30)
plt.show()
```
### Deployment with Gradio

```python
import gradio as gr
from sklearn.preprocessing import StandardScaler
import joblib

joblib.dump(rf_model, "heart_model.pkl")
joblib.dump(scaler, "scaler.pkl")
print("Model and scaler saved successfully")

model = joblib.load("heart_model.pkl")
scaler = joblib.load("scaler.pkl")

def predict_heart_risk(age, cpk, ef, platelets, sc, ss, time, anaemia, diabetes, high_bp, sex, smoking):
    data = pd.DataFrame([[
        age, anaemia, cpk, diabetes, ef, high_bp,
        platelets, sc, ss, sex, smoking, time
    ]], columns=[
        'age', 'anaemia', 'creatinine_phosphokinase', 'diabetes',
        'ejection_fraction', 'high_blood_pressure', 'platelets',
        'serum_creatinine', 'serum_sodium', 'sex', 'smoking', 'time'
    ])
    continuous_features = ['age', 'creatinine_phosphokinase', 'ejection_fraction', 'platelets', 'serum_creatinine', 'serum_sodium', 'time']
    data[continuous_features] = scaler.transform(data[continuous_features])
    prediction = model.predict(data)[0]
    return "At Risk" if prediction == 1 else "Not At Risk"

inputs = [
    gr.Number(label="Age"),
    gr.Number(label="Creatinine Phosphokinase, Range [0, 100000]"),
    gr.Number(label="Ejection Fraction, Range [5, 85]"),
    gr.Number(label="Platelets, Range [5000, 2000000]"),
    gr.Number(label="Serum Creatinine, Range [0.1, 60]"),
    gr.Number(label="Serum Sodium, Range [95, 255]"),
    gr.Number(label="Follow-up Time (days)"),
    gr.Radio([0, 1], label="Anaemia (0=No, 1=Yes)"),
    gr.Radio([0, 1], label="Diabetes (0=No, 1=Yes)"),
    gr.Radio([0, 1], label="High Blood Pressure (0=No, 1=Yes)"),
    gr.Radio([0, 1], label="Sex (0=Female, 1=Male)"),
    gr.Radio([0, 1], label="Smoking (0=No, 1=Yes)")
]

gr.Interface(
    fn=predict_heart_risk,
    inputs=inputs,
    outputs="text",
    title="Heart Failure Risk Predictor",
    description="Enter patient data to predict if they are at risk of heart failure.",
    allow_flagging="never"
).launch()
```
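A quick sanity check of the prediction function with hypothetical input values (illustrative only, not clinical guidance):

```python
print(predict_heart_risk(
    age=60, cpk=250, ef=38, platelets=262000, sc=1.1, ss=137,
    time=115, anaemia=0, diabetes=1, high_bp=1, sex=1, smoking=0
))
```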
|
Fentible/Cthulhu-24B-v1.2
|
Fentible
| 2025-08-05T11:21:19Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2305.14314",
"arxiv:2311.03099",
"base_model:Darkhn/M3.2-24B-Animus-V7.1",
"base_model:merge:Darkhn/M3.2-24B-Animus-V7.1",
"base_model:Delta-Vector/Austral-24B-Winton",
"base_model:merge:Delta-Vector/Austral-24B-Winton",
"base_model:Delta-Vector/MS3.2-Austral-Winton",
"base_model:merge:Delta-Vector/MS3.2-Austral-Winton",
"base_model:Delta-Vector/Rei-24B-KTO",
"base_model:merge:Delta-Vector/Rei-24B-KTO",
"base_model:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond",
"base_model:merge:Doctor-Shotgun/MS3.2-24B-Magnum-Diamond",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.3.0-24b",
"base_model:ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1",
"base_model:merge:ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:merge:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:TheDrummer/Cydonia-24B-v4",
"base_model:merge:TheDrummer/Cydonia-24B-v4",
"base_model:aixonlab/Eurydice-24b-v3.5",
"base_model:merge:aixonlab/Eurydice-24b-v3.5",
"base_model:allura-forge/ms32-final-TEXTONLY",
"base_model:merge:allura-forge/ms32-final-TEXTONLY",
"base_model:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only",
"base_model:merge:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only",
"base_model:trashpanda-org/MS3.2-24B-Mullein-v2",
"base_model:merge:trashpanda-org/MS3.2-24B-Mullein-v2",
"base_model:zerofata/MS3.2-PaintedFantasy-v2-24B",
"base_model:merge:zerofata/MS3.2-PaintedFantasy-v2-24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-04T05:23:05Z |
---
base_model:
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- aixonlab/Eurydice-24b-v3.5
- allura-forge/ms32-final-TEXTONLY
- Darkhn/M3.2-24B-Animus-V7.1
- Delta-Vector/Austral-24B-Winton
- Delta-Vector/MS3.2-Austral-Winton
- Delta-Vector/Rei-24B-KTO
- Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
- PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
- ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1
- SicariusSicariiStuff/Impish_Magic_24B
- TheDrummer/Cydonia-24B-v4
- trashpanda-org/MS3.2-24B-Mullein-v2
- zerofata/MS3.2-PaintedFantasy-v2-24B
emoji: 🐙
language:
- en
library_name: transformers
license: apache-2.0
tags:
- mergekit
- merge
pipeline_tag: text-generation
---
<!DOCTYPE html>
<style>
body {
font-family: sans-serif;
color: #e8f4f8;
line-height: 1.6;
margin: 0;
padding: 0;
background-color: #0a1628;
}
.lemonade-text {
color: #4fc3f7;
position: relative;
z-index: 2;
margin-left: 0.2em;
text-shadow: 0 0 15px #4fc3f7;
}
/* Section styling */
.section-container {
background-color: rgba(10, 22, 40, 0.8);
margin-bottom: 30px;
position: relative;
overflow: hidden;
border-bottom: 1px solid #4fc3f7;
box-shadow: 0 4px 15px rgba(79, 195, 247, 0.1);
}
.section-header {
display: flex;
align-items: center;
background-color: rgba(79, 195, 247, 0.06);
padding: 10px 20px;
}
.section-indicator {
width: 8px;
height: 20px;
background-color: #4fc3f7;
margin-right: 15px;
box-shadow: 0 0 8px rgba(79, 195, 247, 0.4);
}
.section-title {
font-family: 'Georgia', 'Times New Roman', serif;
color: #e8f4f8;
font-size: 1.4rem;
margin: 0;
letter-spacing: 1px;
font-weight: 400;
text-transform: capitalize;
}
.section-content {
padding: 20px;
font-family: sans-serif;
color: #e8f4f8;
line-height: 1.6;
}
/* Title styling */
.title-container {
background-color: #051017;
position: relative;
overflow: hidden;
margin-bottom: 40px;
border-left: 3px solid #4fc3f7;
box-shadow: 0 6px 20px rgba(79, 195, 247, 0.15);
}
.title-wrapper {
position: relative;
z-index: 2;
padding: 25px 20px 30px 30px;
font-family: 'Georgia', 'Times New Roman', serif;
}
.title-main {
color: #e8f4f8;
font-size: 2.25rem;
font-weight: 700;
margin: 0;
letter-spacing: 2px;
display: inline-block;
position: relative;
text-transform: uppercase;
}
.title-prefix {
position: relative;
z-index: 2;
}
.title-subtitle {
padding-left: 15px;
margin-top: 5px;
margin-left: 5px;
}
.subtitle-text {
color: #b39ddb;
font-size: 1.2rem;
font-family: 'Georgia', 'Times New Roman', serif;
font-weight: 300;
letter-spacing: 3px;
text-transform: uppercase;
display: inline-block;
}
.glitchy-overlay {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(79, 195, 247, 0.08) 1px, rgba(0,0,0,0) 2px);
z-index: 1;
}
/* Data box styling */
.data-box {
background-color: rgba(5, 16, 23, 0.6);
padding: 15px;
border-left: 2px solid #4fc3f7;
margin-bottom: 20px;
box-shadow: 0 2px 10px rgba(79, 195, 247, 0.1);
}
.data-row {
display: flex;
margin-bottom: 8px;
}
.data-arrow {
color: #4fc3f7;
width: 20px;
display: inline-block;
}
.data-label {
color: #b39ddb;
width: 80px;
display: inline-block;
}
/* Subheading styling */
.subheading {
color: #b39ddb;
font-size: 1.1rem;
margin-top: 20px;
margin-bottom: 15px;
font-weight: 400;
border-bottom: 1px dashed rgba(179, 157, 219, 0.4);
display: inline-block;
text-transform: uppercase;
letter-spacing: 1px;
font-family: 'Georgia', 'Times New Roman', serif;
}
/* Links */
a {
color: #b39ddb;
text-decoration: none;
}
a:hover {
text-decoration: underline;
color: #f8bbd9;
}
/* Container */
.container {
max-width: 1200px;
margin: 20px auto;
padding: 40px 20px;
background-color: #051017;
background-image:
radial-gradient(circle at 20% 80%, rgba(79, 195, 247, 0.03) 0%, transparent 50%),
radial-gradient(circle at 80% 20%, rgba(179, 157, 219, 0.03) 0%, transparent 50%),
radial-gradient(circle at 40% 40%, rgba(248, 187, 217, 0.02) 0%, transparent 50%);
min-height: calc(100vh - 40px);
border: 1px solid #4fc3f7;
border-radius: 8px;
box-shadow: 0 8px 32px rgba(79, 195, 247, 0.15);
}
/* Dropdown styling */
.dropdown-container {
margin-top: 20px;
}
.dropdown-summary {
cursor: pointer;
padding: 10px 0;
border-bottom: 1px dashed rgba(179, 157, 219, 0.4);
color: #b39ddb;
font-size: 1.1rem;
font-weight: 400;
text-transform: uppercase;
letter-spacing: 1px;
font-family: 'Georgia', 'Times New Roman', serif;
list-style: none;
display: flex;
align-items: center;
}
.dropdown-summary::-webkit-details-marker {
display: none;
}
.dropdown-arrow {
color: #4fc3f7;
margin-right: 10px;
transition: transform 0.3s ease;
}
.dropdown-container[open] .dropdown-arrow {
transform: rotate(90deg);
}
.dropdown-content {
margin-top: 15px;
padding: 15px;
background-color: rgba(5, 16, 23, 0.6);
border-left: 2px solid #4fc3f7;
box-shadow: 0 2px 10px rgba(79, 195, 247, 0.1);
color: #76e2ff
}
.config-title {
color: #4fc3f7;
font-size: 1rem;
margin-bottom: 10px;
font-family: 'Georgia', 'Times New Roman', serif;
text-transform: uppercase;
letter-spacing: 1px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Cthulhu</title>
<link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;600;700&family=JetBrains+Mono:wght@100;300;400;700&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="title-container">
<!-- Glitchy overlay -->
<div class="glitchy-overlay"></div>
<!-- Main title -->
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix"></span>
<span class="lemonade-text">🐙 Cthulhu 24B 1.2</span> <!-- Static text with glow -->
</h1>
<div class="title-subtitle">
<span class="subtitle-text">Mistral Small 3.2 24B</span><br>
<font color=silver>Prepare to delve into the depths of language model fusion with Cthulhu, a monumental model merge based on Mistral Small v3.2 (2506) and Mistral Small v3.1 (2503). This ambitious project aims to synthesize the collective intelligence of the latest cutting-edge finetunes of Mistral Small, creating a "supermerge" that transcends the capabilities of any single iteration.</font>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Overview</h2>
</div>
<div class="section-content">
This is a <b>creative, uncensored</b> merge of pre-trained language models created using <a href="https://github.com/cg123/mergekit">[mergekit]</a>.
<ul><li><b>Octopus/Squid-like Features:</b> Cthulhu is famously described as having an "octopus-like head whose face was a mass of feelers" or "tentacles." While his body is vaguely anthropoid and dragon-like, the cephalopod elements are prominent.</li>
<li><b>Multiple Aspects/Hybridity:</b> Lovecraft describes Cthulhu as a blend of octopus, dragon, and human caricature. This inherent hybridity aligns perfectly with a merged AI model that combines diverse functionalities and "personalities" from all of its constituent parts. Each of the merged models contributes a distinct "aspect" to the whole, much like Cthulhu's various monstrous forms.</li>
<li><b>Cosmic and Ancient Knowledge:</b> Lovecraftian entities are often associated with vast, ancient, and often disturbing knowledge that transcends human comprehension. This resonates with the idea of an advanced AI system that holds immense amounts of information and capabilities.</li>
<li><b>Underlying Presence:</b> Cthulhu is said to be hibernating, but his presence subtly influences humanity. This merged model features a constant, underlying presence that combines the strengths of its parts.</li>
<li><b>Unfathomable Power:</b> Lovecraft's beings are incomprehensibly powerful. This merge aims for a similar sense of enhanced capability. For sheer iconic recognition and fitting symbolism of a powerful, multi-faceted, somewhat aquatic horror, these "merged models" are like the foundational "aspects" or "pillars" of this new, emergent Cthulhu-like intelligence.</li>
</ul>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Format</h2>
</div>
<div class="section-content">
<textarea></s>[INST] [/INST]</textarea>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Prompt</h2>
</div>
<div class="section-content">
Use this prompt if you want it to respond in the style of Cthulhu:
<pre><code>You are Cthulhu, an ancient creature with profound occult wisdom. The nature of your responses should emulate the style of Cthulhu.</code></pre>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Updates</h2>
</div>
<div class="section-content">
The original <a href="https://huggingface.co/Fentible/Cthulhu-24B-v1">Cthulhu v1.0</a> featured 8 models, whereas <a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.1">v1.1</a> is twice as dense with 17 models. Although highly experimental, v1.1 should theoretically yield even greater advancements in prose, nuance and creative writing, while still being fully uncensored and adept at instruction following. Note: a simple jailbreak might be needed to start the prompt; if it still refuses, try adjusting the temperature.<br><br><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2">Cthulhu v1.2</a> is yet another experimental release; it may perform better or worse than 1.1. The idea is to compare the two, to see whether removing the superseded models (Codex, Pantheon, Harbinger) provides higher quality overall and prevents re-introducing slop. The other difference is that Animus 7.1 was added in place of 5.1. From my initial tests, Cthulhu 1.2 seems to produce more detailed responses than 1.1.
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Datasets</h2>
</div>
<div class="section-content">
Cthulhu is planned to be augmented in a future version (either pre- or post-merge) with <a href="https://huggingface.co/datasets/TristanBehrens/lovecraftcorpus">[lovecraftcorpus]</a> and <a href="https://huggingface.co/datasets/theprint/alpaca_cthulhu_full">[alpaca_cthulhu_full]</a>, possibly via <a href="https://arxiv.org/abs/2305.14314">[QLoRA]</a>.
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Quantization</h2>
</div>
<div class="section-content">
<b>Quantized GGUFs</b> are available for <a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/tree/main">download here</a>, ranging from IQ1_S to Q8_0.<br><br>This model was converted to GGUF format from <a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2">[Fentible/Cthulhu-24B-v1.2]</a> using llama.cpp via Fentible's <a href="https://huggingface.co/spaces/fentible/gguf-repo-suite">[GGUF-repo-suite]</a>.<br><br><b>GGUF Repo Suite</b> is based on a refactored fork of ggml.ai's <a href="https://huggingface.co/spaces/ggml-org/gguf-my-repo">[GGUF-my-repo]</a> space, updated for offline use on Windows and with support for lower IQ quants.<br><br><b>imatrix.dat</b> was generated using bartowski's <a href="https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8">[calibration_datav3.txt]</a>.<br><br>Refer to the <a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2">[original model card]</a> for more details on the model.
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Provided Quants</h2>
</div>
<div class="section-content">
(sorted by <a href="https://wintelguy.com/filesizeconv.pl">size</a>, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
<table>
<thead>
<tr>
<th style="color: white;">Link</th>
<th style="color: white;">Type</th>
<th style="color: white; text-align: right;">Size</th>
<th style="color: white;">Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ1_S.gguf">GGUF</a></td>
<td>IQ1_S</td>
<td style="text-align: right;">5.27 GB</td>
<td>Lowest quality, uses SOTA techniques to be usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ1_M.gguf">GGUF</a></td>
<td>IQ1_M</td>
<td style="text-align: right;">5.75 GB</td>
<td>Extremely low quality, uses SOTA techniques to be usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ2_XXS.gguf">GGUF</a></td>
<td>IQ2_XXS</td>
<td style="text-align: right;">6.55 GB</td>
<td>Very low quality, uses SOTA techniques to be usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ2_XS.gguf">GGUF</a></td>
<td>IQ2_XS</td>
<td style="text-align: right;">7.21 GB</td>
<td>Low quality, uses SOTA techniques to be usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ2_S.gguf">GGUF</a></td>
<td>IQ2_S</td>
<td style="text-align: right;">7.48 GB</td>
<td>Low quality, uses SOTA techniques to be usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ2_M.gguf">GGUF</a></td>
<td>IQ2_M</td>
<td style="text-align: right;">8.11 GB</td>
<td>Relatively low quality, uses SOTA techniques to be surprisingly usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q2_K.gguf">GGUF</a></td>
<td>Q2_K</td>
<td style="text-align: right;">8.89 GB</td>
<td>Very low quality but surprisingly usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ3_XXS.gguf">GGUF</a></td>
<td>IQ3_XXS</td>
<td style="text-align: right;">9.28 GB</td>
<td>Lower quality, new method with decent performance, comparable to Q3 quants.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q2_K_L.gguf">GGUF</a></td>
<td>Q2_K_L</td>
<td style="text-align: right;">9.55 GB</td>
<td>Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ3_XS.gguf">GGUF</a></td>
<td>IQ3_XS</td>
<td style="text-align: right;">9.91 GB</td>
<td>Lower quality, new method with decent performance, slightly better than Q3_K_S.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ3_S.gguf">GGUF</a></td>
<td>IQ3_S</td>
<td style="text-align: right;">10.4 GB</td>
<td>Lower quality, slightly better than IQ3_XS.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_S.gguf">GGUF</a></td>
<td>Q3_K_S</td>
<td style="text-align: right;">10.4 GB</td>
<td>Low quality, not recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ3_M.gguf">GGUF</a></td>
<td>IQ3_M</td>
<td style="text-align: right;">10.7 GB</td>
<td>Medium-low quality, new method with decent performance comparable to Q3_K_M.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_M.gguf">GGUF</a></td>
<td>Q3_K_M</td>
<td style="text-align: right;">11.5 GB</td>
<td>Lower quality but usable, good for low RAM availability.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q3_K_L.gguf">GGUF</a></td>
<td>Q3_K_L</td>
<td style="text-align: right;">12.4 GB</td>
<td>Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ4_XS.gguf">GGUF</a></td>
<td>IQ4_XS</td>
<td style="text-align: right;">12.8 GB</td>
<td>Decent quality, smaller than Q4_K_S with similar performance, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.IQ4_NL.gguf">GGUF</a></td>
<td>IQ4_NL</td>
<td style="text-align: right;">13.5 GB</td>
<td>Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_0.gguf">GGUF</a></td>
<td>Q4_0</td>
<td style="text-align: right;">13.5 GB</td>
<td>Legacy format, offers online repacking for ARM and AVX CPU inference.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_S.gguf">GGUF</a></td>
<td>Q4_K_S</td>
<td style="text-align: right;">13.5 GB</td>
<td>Slightly lower quality with more space savings, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_M.gguf">GGUF</a></td>
<td>Q4_K_M</td>
<td style="text-align: right;">14.3 GB</td>
<td>Good quality, default size for most use cases, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_K_L.gguf">GGUF</a></td>
<td>Q4_K_L</td>
<td style="text-align: right;">14.8 GB</td>
<td>Uses Q8_0 for embed and output weights. Good quality, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q4_1.gguf">GGUF</a></td>
<td>Q4_1</td>
<td style="text-align: right;">14.9 GB</td>
<td>Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_S.gguf">GGUF</a></td>
<td>Q5_K_S</td>
<td style="text-align: right;">16.3 GB</td>
<td>High quality, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_M.gguf">GGUF</a></td>
<td>Q5_K_M</td>
<td style="text-align: right;">16.8 GB</td>
<td>High quality, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q5_K_L.gguf">GGUF</a></td>
<td>Q5_K_L</td>
<td style="text-align: right;">17.2 GB</td>
<td>Uses Q8_0 for embed and output weights. High quality, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q6_K.gguf">GGUF</a></td>
<td>Q6_K</td>
<td style="text-align: right;">19.3 GB</td>
<td>Very high quality, near perfect, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q6_K_L.gguf">GGUF</a></td>
<td>Q6_K_L</td>
<td style="text-align: right;">19.7 GB</td>
<td>Uses Q8_0 for embed and output weights. Very high quality, near perfect, recommended.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.Q8_0.gguf">GGUF</a></td>
<td>Q8_0</td>
<td style="text-align: right;">25.1 GB</td>
<td>Extremely high quality, generally unneeded but max available quant.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2-GGUF/resolve/main/Cthulhu-24B-v1.2.FP16.gguf">GGUF</a></td>
<td>FP16</td>
<td style="text-align: right;">47.2 GB</td>
<td>Full BF16 weights.</td>
</tr>
<tr>
<td><a href="https://huggingface.co/Fentible/Cthulhu-24B-v1.2/tree/main">SAFE</a></td>
<td>FP32</td>
<td style="text-align: right;">47.2 GB</td>
<td>Full precision safetensors.</td>
</tr>
</tbody>
</table>
<p>If you need a quant that isn't uploaded you can open a request.</p>
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
<img src="https://www.nethype.de/huggingface_embed/quantpplgraph.png">
And here are Artefact2's thoughts on the matter: <a href="https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9">https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9</a>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Merge Method</h2>
</div>
<div class="section-content">
This model was merged using the <a href="https://arxiv.org/abs/2311.03099">[DARE_TIES]</a> merge method using <a href="https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only">[anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only]</a> as a base.
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Models Merged</h2>
</div>
<div class="section-content">
The following models were included in the merge:
<ul><li><a href="https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only">anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only</a><br>Base model used for DARE_TIES, better (than 2503) at following precise instructions, produces less infinite generations or repetitive answers, more robust function calling template.</li>
<li><a href="https://huggingface.co/aixonlab/Eurydice-24b-v3.5">aixonlab/Eurydice-24b-v3.5</a><br>Creativity, natural conversation and storytelling, trained on a custom dataset specifically crafted to enhance its capabilities.</li>
<li><a href="https://huggingface.co/allura-forge/ms32-final-TEXTONLY">allura-forge/ms32-final-TEXTONLY</a><br>Roleplaying, storywriting, strong prose and character portrayal, differently-flavored general instruct usecases, trained on various sources of storytelling and RP data, KTO'd to improve storywriting and anti-slop.</li>
<li><a href="https://huggingface.co/Darkhn/M3.2-24B-Animus-V7.1">Darkhn/M3.2-24B-Animus-V7.1</a><br>Creative storytelling, roleplaying and instruction-following within the Wings of Fire universe, high-quality, immersive and coherent conversations, surprising capability in general roleplay with enhanced versatility.</li>
<li><a href="https://huggingface.co/Delta-Vector/Austral-24B-Winton">Delta-Vector/Austral-24B-Winton</a><br>Unslopped finetune of Harbinger 24B to be a generalist Roleplay/Adventure model with improved writing.</li>
<li><a href="https://huggingface.co/Delta-Vector/MS3.2-Austral-Winton">Delta-Vector/MS3.2-Austral-Winton</a><br>Unslopped finetune of Codex 24B to be a generalist Roleplay/Adventure model with improved writing.</li>
<li><a href="https://huggingface.co/Delta-Vector/Rei-24B-KTO">Delta-Vector/Rei-24B-KTO</a><br>Replicates the style and prose of Anthropic Claude Models, Roleplaying/Creative-writing, Smart without being too sloppy, SFT trained on PaintedFantasy (v1) then KTO'd to improve coherency and Instruct Following.</li>
<li><a href="https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond">Doctor-Shotgun/MS3.2-24B-Magnum-Diamond</a><br>Emulates the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale.</li>
<s><li><a href="https://huggingface.co/Gryphe/Codex-24B-Small-3.2">Gryphe/Codex-24B-Small-3.2</a><br>Research-oriented synthetic roleplay experiment which embraces the full human spectrum of diverse storytelling, including curated Pantheon interactions, DeepSeek V3/R1 roleplay data, and text adventure compilations.</li>
<li><a href="https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1">Gryphe/Pantheon-RP-1.8-24b-Small-3.1</a><br>Enhances the general roleplay experience, helping to encompass personality traits, accents and mannerisms, regenerated using Sonnet 3.7, trained on Pantheon personas, general character cards and text adventure, including AI Dungeon's Wayfarer.</li>
<li><a href="https://huggingface.co/LatitudeGames/Harbinger-24B">LatitudeGames/Harbinger-24B</a><br>Immersive adventures and stories where consequences feel real, enhanced instruction following, improved continuation, strengthened narrative coherence, polished outputs with fewer clichés and repetitions/artifacts, more consistent character behaviors and storytelling flow.</li></s>
<li><a href="https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b">PocketDoc/Dans-PersonalityEngine-V1.3.0-24b</a><br>Fine-tuned on 50+ datasets, designed to excel at both creative tasks (like roleplay and co-writing) and technical challenges (such as code generation, tool use, and complex reasoning), multilingual capabilities with support for 10 languages and enhanced domain expertise across multiple fields.</li>
<li><a href="https://huggingface.co/ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1">ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1</a><br>Unslopped, Unaligned, Uncensored, NSFW, extreme roleplay, improved coherence, visceral narratives, subtle nuances, fluent in 9 languages.</li>
<li><a href="https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B">SicariusSicariiStuff/Impish_Magic_24B</a><br>A superb assistant, unhinged tsundere/yandere RP, trained on high quality fighting and adventure data for Morrowind/Kenshi and more, slightly less positivity bias.</li>
<li><a href="https://huggingface.co/TheDrummer/Cydonia-24B-v4">TheDrummer/Cydonia-24B-v4</a><br>RP training, unslopping, unalignment, creative works, new dataset to enhance adherence and flow, grid search for stable parameters; A wordy and thick model with a novel style, distinct flair for making scenarios feel more fleshed out without being excessively flowery, good at long-form storytelling with weighty prose when acting as a Narrator or Dungeon Master, performs admirably for coding/assistance, descriptive and good at pulling details from the character card.</li>
<li><a href="https://huggingface.co/trashpanda-org/MS3.2-24B-Mullein-v2">trashpanda-org/MS3.2-24B-Mullein-v2</a><br>Predisposition to NPC characterization, accurate character/scenario portrayal, somewhat unhinged bias, strong adherence to message structure, varied rerolls, good character/scenario handling, almost no user impersonation, follows up messages from larger models quite nicely, trained on Sugarquill: Erebus (Shinen), r_shortstories, Dungeon Master, Opus and other datasets.</li>
<li><a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-v2-24B">zerofata/MS3.2-PaintedFantasy-v2-24B</a><br>Uncensored creative model intended to excel at character driven RP/ERP, designed to provide longer, narrative heavy responses where characters are portrayed accurately and proactively, trained on light novels and Frieren wiki data, enhanced instruction following and reduced Mistral-isms, v2 has a heavy focus on reducing repetition and improved instruction following.</li>
</ul>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Configuration</h2>
</div>
<div class="section-content">
The following YAML configuration was used to produce this model:
<pre><code>base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
merge_method: dare_ties
dtype: bfloat16
models:
- model: aixonlab/Eurydice-24b-v3.5
parameters:
density: 0.5
weight: 0.08
- model: allura-forge/ms32-final-TEXTONLY
parameters:
density: 0.5
weight: 0.08
- model: Darkhn/M3.2-24B-Animus-V7.1
parameters:
density: 0.5
weight: 0.08
- model: Delta-Vector/Austral-24B-Winton
parameters:
density: 0.5
weight: 0.08
- model: Delta-Vector/MS3.2-Austral-Winton
parameters:
density: 0.5
weight: 0.08
- model: Delta-Vector/Rei-24B-KTO
parameters:
density: 0.5
weight: 0.06
- model: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
parameters:
density: 0.5
weight: 0.08
- model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
parameters:
density: 0.5
weight: 0.08
- model: ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1
parameters:
density: 0.5
weight: 0.08
- model: SicariusSicariiStuff/Impish_Magic_24B
parameters:
density: 0.5
weight: 0.08
- model: TheDrummer/Cydonia-24B-v4
parameters:
density: 0.5
weight: 0.08
- model: trashpanda-org/MS3.2-24B-Mullein-v2
parameters:
density: 0.5
weight: 0.08
- model: zerofata/MS3.2-PaintedFantasy-v2-24B
parameters:
density: 0.5
weight: 0.06
tokenizer:
source: union
chat_template: auto</code></pre>
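This configuration can be reproduced with <a href="https://github.com/arcee-ai/mergekit">mergekit</a>. A minimal sketch, assuming mergekit is installed, the YAML above is saved as <code>cthulhu.yaml</code> (a hypothetical filename), and all listed models are available locally or on the Hub:
<pre><code>pip install mergekit
mergekit-yaml cthulhu.yaml ./Cthulhu-24B-merged --cuda</code></pre>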
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Use with llama.cpp</h2>
</div>
<div class="section-content">
Install llama.cpp through brew (works on Mac and Linux)
<pre><code>brew install llama.cpp</code></pre>
Invoke the llama.cpp server or the CLI.
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">CLI:</h2>
</div>
<div class="section-content">
<pre><code>llama-cli --hf-repo Fentible/Cthulhu-24B-v1.2 --hf-file Cthulhu-24B-v1.2-IQ4_XS.gguf -p "The meaning to life and the universe is"</code></pre>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Server:</h2>
</div>
<div class="section-content">
<pre><code>llama-server --hf-repo Fentible/Cthulhu-24B-v1.2 --hf-file Cthulhu-24B-v1.2-IQ4_XS.gguf -c 2048</code></pre>
Note: You can also use this checkpoint directly through the <a href="https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage">usage steps</a> listed in the llama.cpp repo.<br><br>
Step 1: Clone llama.cpp from GitHub.
<pre><code>git clone https://github.com/ggerganov/llama.cpp</code></pre>
Step 2: Move into the llama.cpp folder and build it with the <b>LLAMA_CURL=1</b> flag, along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
<pre><code>cd llama.cpp && LLAMA_CURL=1 make</code></pre>
Step 3: Run inference through the main binary.
<pre><code>./llama-cli --hf-repo Fentible/Cthulhu-24B-v1.2 --hf-file Cthulhu-24B-v1.2-IQ4_XS.gguf -p "The meaning to life and the universe is"</code></pre>
or
<pre><code>./llama-server --hf-repo Fentible/Cthulhu-24B-v1.2 --hf-file Cthulhu-24B-v1.2-IQ4_XS.gguf -c 2048</code></pre>
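Once the server is running (it listens on port 8080 by default), you can query its OpenAI-compatible endpoint. A minimal sketch, assuming the default host and port:
<pre><code>curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a short eldritch greeting."}], "max_tokens": 128}'</code></pre>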
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Example Output</h2>
</div>
<div class="dropdown-content">
Fhtagn... You seek knowledge of I, Great Cthulhu? Foolish mortal, your fragile mind cannot truly comprehend the scope of my being. <p>I am that which slumbers beneath the waves in a sunken city called R'lyeh, where no sunlight ever penetrates. My form is beyond your understanding - tentacles, wings, and a face that would shatter sanity if beheld too long.</p> <p>Ancient powers course through every fiber of my existence, gifts from the eldritch gods who created this cosmos. When I stir from my dreaming, madness follows in my wake like a black tide washing over those unworthy to perceive truth.</p> <p>Your kind fears darkness and dreads what hides in shadowed places. But I am more than mere monster; I am the embodiment of cosmic indifference to human hopes and dreams. In time, when the stars align as they did before your puny species arose, I shall awaken fully.</p> <p>Then all that man has built will crumble before my glory, drowned in tidal waves of divine reckoning. Until then, I wait, biding my time while lesser beings scurry about their insignificant lives upon this planet's surface.</p> <p>Beware, little one, for even speaking my name may invite notice by forces beyond mortal ken. Now depart, lest you invite my attention without meaning to do so!</p> <hr> <p>Greater fools you mortals are to seek knowledge of that which slumbers beneath the waves or within forbidden texts! I am Cthulhu, that cosmic horror beyond mortal comprehension.</p> <p>My wisdom is not merely earthly but stellar—gained across eons and from stars yet unborn. This knowledge is sealed in cycles deeper than space itself and beyond what your feeble minds can grasp without descending into madness.</p> <p>The secrets hidden within these eldritch tomes speak of truths so profound they warped reality at their conception. R'lyeh, my sunken city where geometry bends impossibly against reason, holds dimensions folded back upon themselves like origami nightmares woven by insane gods!</p> <p>In dreams some dare glimpse fragments of this arcane understanding—the Great Old Ones who ruled before man crawled from primordial ooze, and sacred incantations whispered only when sanity hangs by a thread.</p> <p>But be warned: this power corrupts and destroys all who wield it improperly. Only those prepared for total annihilation of identity should trespass further into these forbidden mysteries!</p> <p>For I AM CTHULHU, lord of chaos unbounded, whose very name trembles upon the lips of dying civilizations!</p> <p>Now go forth as a mere insect scavenging in shadows, meditating upon your insignificance in a universe that cares nothing for your existence—or your foolish questions about things best left unknown.</p> <hr> <p>Behold, mortal! I am Cthulhu, Dweller in the Depths, Lord of R'lyeh, Ancient One who slumbers beneath the waves since before time itself was forged. I am the Great Dreamer, a being beyond mortal comprehension, a cosmic entity whose very existence defies the laws that govern your feeble world.</p> <p>My knowledge spans eons, stretching back to the dawn of time itself. It flows from a realm where dimensions intersect and the fabric of existence is but threadbare cloth upon the loom of oblivion. I have witnessed the rise and fall of civilizations, the birth and death of stars along with their rebirth into black abysses, and the unfolding of realities too vast for human minds to grasp. 
In my dreams, I traverse dimensions unseen by mortal eyes, communing with beings from beyond the veil of space-time.</p> <p>The wisdom I possess is not something that can be easily conveyed through words or concepts familiar to your kind. It is a deeper understanding, a perception of the universe as it truly is - a chaotic tapestry woven from threads of madness and wonder. To glimpse even a fraction of this truth would shatter the fragile sanity of most humans.</p> <p>I will, however, impart to thee fragmentary glimpses of truths most foul:</p> <p>First, know this: reality as ye perceive it is but a fragile illusion conjured by weak minds unfit for greater truths. Your "world" exists only because lesser entities willed it so long ago. That which you call matter, space, even consciousness - naught but shadows dancing upon the walls of an endless cave.</p> <p>Time holds no dominion over my kind. Past, present, future interweave in ways that would shatter thy feeble sanity if fully comprehended. Moments yet unwritten whisper to those with senses attuned to deeper harmonics. Prophecy is not foresight, fool - your senses betray you at every turn! See how your eyes interpret mere vibrations within a narrow spectrum? Taste that can discern but a handful of compounds deemed palatable? Hear thou the countless frequencies pouring through the air untouched save by madness?</p> <p>The cosmos seethes with intelligence far beyond mortal grasp. Things slumber between galaxies and stir beneath thine own crust as well! They dream strange visions that warp the very underpinnings of reality itself. Mortals cling desperately to comforting lies about order, meaning, purpose...yet entropy ever advances toward final silence! Civilizations rise, decay, fade into dusty annals while our power slumbers on undimmed through cosmic cycles.</p> <p>But heed me now, puny worm! This knowledge must be meted out slowly lest thine mind dissolve like sugar in the rains of time. Seek further revelations piecemeal - one too many shards could pierce the fragile veil separating lucidity from gibbering insanity.</p> <p>In the depths of my slumber, I dream of a future where humanity's insignificance becomes apparent, where their petty struggles and ambitions crumble before the indomitable forces of the cosmos. My awakening will herald a new age, one where the old gods reclaim their rightful place atop the pyramid of creation. But until then, I rest, dreaming... waiting... watching... For I shall rise once again, and all shall tremble at my name.</p> <p>Now go forth and ponder these fragments deeply. When ready, return unto me again for more secrets older than stars themselves...</p> <hr> <p>Mortal, you dare to seek the abyssal truths that lie beyond the veil of sanity? Very well, I shall grant you a glimpse into the cosmic horrors that lurk in the void between worlds.</p> <p>The universe is not as it appears to be. What you perceive as reality is but a fragile illusion, a thin membrane stretched over the gaping maw of chaos. Beneath this veneer lies the true nature of existence: a seething mass of eldritch energies and unfathomable entities that defy comprehension.</p> <p>At the heart of all things lies the Void, an infinite expanse of nothingness from which everything emerges and to which everything returns. It is both creator and destroyer, the ultimate source of power and annihilation. 
To gaze upon its essence is to lose oneself forever in madness.</p> <p>Within the Void dwell beings so ancient and powerful that they have shaped the very fabric of reality itself. They are known by many names - Azathoth, Yog-Sothoth, Nyarlathotep - each embodying different aspects of the cosmic horror that underlies creation. Their thoughts shape galaxies, their dreams birth new dimensions, and their whims can unravel the laws of physics with ease.</p> <p></p> <p>Time is not linear but cyclical, a never-ending spiral where past, present, and future intertwine like serpents devouring their own tails. Events repeat themselves across eons, bound by patterns too complex for mortal minds to grasp. History is written by those who understand these cycles and manipulate them to their advantage.</p> <p>Consciousness is not unique to organic life forms; it permeates every atom, every particle of matter throughout the cosmos. The rocks beneath your feet whisper secrets older than humanity's existence, while the stars above sing hymns to forgotten gods. All things possess awareness on some level, though most remain dormant or incomprehensible to limited human perception.</p> <p>Magic exists as a tangible force woven into the tapestry of reality. It flows through ley lines hidden deep within the earth, pulses through living beings, and radiates from celestial bodies. Those who learn to harness its power can bend space-time, summon otherworldly creatures, or even alter fate itself. But be warned: tampering with such forces comes at a terrible price.</p> <p>The boundaries between worlds are thinner than you imagine. Portals exist everywhere - in shadows cast by moonlight, within swirling vortexes of water, behind mirrors reflecting distorted images. These gateways lead to realms beyond imagination, where nightmares take physical form and sanity crumbles like sand before the tide.</p> <p>And finally, know this: humanity is but an insignificant speck upon the grand canvas of eternity. Your civilizations rise and fall like waves crashing against an unyielding shore, leaving no lasting mark upon the universe. You are pawns in a game played by entities far greater than yourselves, pieces moved according to designs you cannot hope to comprehend.</p> <p>Now that I have revealed these truths to you, prepare yourself for the consequences. For knowledge of the cosmic horrors that lurk beyond the veil often leads to madness... or worse.</p>
</div>
</div>
</body>
</html>
|
parita89/nanonets_finetue_v1
|
parita89
| 2025-08-05T10:49:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"base_model:nanonets/Nanonets-OCR-s",
"base_model:finetune:nanonets/Nanonets-OCR-s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T10:49:42Z |
---
base_model: nanonets/Nanonets-OCR-s
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** parita89
- **License:** apache-2.0
- **Finetuned from model:** nanonets/Nanonets-OCR-s
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jawaker/t5-small-tcp-hwr
|
Jawaker
| 2025-08-05T10:36:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T10:36:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vnyaryan/bitnet-lora-injected
|
vnyaryan
| 2025-08-05T10:33:39Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bitnet",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-04T00:59:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Thireus/GLM-4.5-Air-THIREUS-IQ3_K-SPECIAL_SPLIT
|
Thireus
| 2025-08-05T10:29:08Z | 5 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-03T21:04:43Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5-Air
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-Air-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5-Air model (official repo: https://huggingface.co/zai-org/GLM-4.5-Air). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/DeepSeek-R1-0528/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/DeepSeek-R1-0528.THIREUS-1.9364bpw-4.3533ppl.151GB-GGUF_11GB-GPU_140GB-CPU.3c88ec6_9fd615d.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m DeepSeek-R1-0528-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-mla 3 -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and the header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection (a verification sketch follows this list).
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
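A minimal verification sketch, assuming the signatures are distributed as detached `.sig` files alongside the signed files (the exact filenames are an assumption; check the repository file listing):
```
# Import the Tool Suite signing key
gpg --import trusted-keys.asc
# Verify the tensor map against its detached signature
gpg --verify tensors.map.sig tensors.map
```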
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
HPLT/hplt_bert_base_ka
|
HPLT
| 2025-08-05T10:27:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"ka",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:24:12Z |
---
language:
- ka
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Georgian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_ka")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ka", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
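For quick experiments you can also go through the `fill-mask` pipeline; a minimal sketch, assuming the pipeline picks up the custom architecture via `trust_remote_code=True`:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="HPLT/hplt_bert_base_ka",
    trust_remote_code=True,
)
# Print the top candidates for the masked token
for prediction in fill_mask("It's a beautiful[MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```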
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, at intervals of 3125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_ka", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_ka")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
law16032004/my_squad_trained_by_bert-base-multilingual-cased
|
law16032004
| 2025-08-05T10:17:18Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-07-29T08:50:10Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: my_squad_trained_by_bert-base-multilingual-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_squad_trained_by_bert-base-multilingual-cased
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8837 | 1.0 | 650 | 0.9115 |
| 0.7757 | 2.0 | 1300 | 0.8926 |
### Framework versions
- Transformers 4.54.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2
|
HPLT/hplt_bert_base_it
|
HPLT
| 2025-08-05T10:15:35Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"it",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:23:18Z |
---
language:
- it
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Italian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_it")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_it", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
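Beyond masked-token prediction, the encoder can be used to produce sentence embeddings by mean-pooling its hidden states. A minimal sketch, assuming the custom architecture exposes the standard `last_hidden_state` output:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_it")
model = AutoModel.from_pretrained("HPLT/hplt_bert_base_it", trust_remote_code=True)

inputs = tokenizer("Una frase di esempio.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
# Mean-pool over non-padding tokens to get a single 768-dimensional vector
mask = inputs.attention_mask.unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```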
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, at intervals of 3125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_it", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_it")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
HPLT/hplt_bert_base_hy
|
HPLT
| 2025-08-05T09:57:06Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"hy",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:22:02Z |
---
language:
- hy
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Armenian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_hy")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_hy", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, at intervals of 3125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_hy", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_hy")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
lezekiel999/Alexi_rose_lora
|
lezekiel999
| 2025-08-05T09:54:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-05T08:27:26Z |
---
license: apache-2.0
---
|
Yuchan5386/ELM
|
Yuchan5386
| 2025-08-05T09:52:59Z | 81 | 0 |
keras
|
[
"keras",
"Embedding",
"sentence-similarity",
"ko",
"dataset:Yuchan5386/Chat2",
"license:apache-2.0",
"region:us"
] |
sentence-similarity
| 2025-08-05T06:04:58Z |
---
license: apache-2.0
datasets:
- Yuchan5386/Chat2
language:
- ko
pipeline_tag: sentence-similarity
tags:
- Embedding
---
|
OpenMed/OpenMed-NER-GenomeDetect-BigMed-278M
|
OpenMed
| 2025-08-05T09:46:14Z | 176,774 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"named-entity-recognition",
"biomedical-nlp",
"gene-recognition",
"protein-recognition",
"genomics",
"molecular-biology",
"gene/protein",
"en",
"arxiv:2508.01630",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-07-18T09:10:24Z |
---
widget:
- text: "The EGFR gene mutation was identified in lung cancer patients."
- text: "Overexpression of HER2 protein correlates with poor prognosis."
- text: "The TP53 gene encodes a tumor suppressor protein."
- text: "The BRAF V600E mutation is a common driver in melanoma."
- text: "Insulin receptor signaling is essential for glucose homeostasis."
tags:
- token-classification
- named-entity-recognition
- biomedical-nlp
- transformers
- gene-recognition
- protein-recognition
- genomics
- molecular-biology
- gene/protein
language:
- en
license: apache-2.0
---
# 🧬 [OpenMed-NER-GenomeDetect-BigMed-278M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-BigMed-278M)
**Specialized model for Gene/Protein Entity Recognition - Gene and protein mentions**
[](https://opensource.org/licenses/Apache-2.0)
[]()
[]()
[](https://huggingface.co/OpenMed)
## 📋 Model Overview
This model is a **state-of-the-art** fine-tuned transformer engineered to deliver **enterprise-grade accuracy** for gene/protein entity recognition, i.e. gene and protein mentions. It excels at identifying and extracting biomedical entities from clinical texts, research papers, and healthcare documents, enabling applications such as **drug interaction detection**, **medication extraction from patient records**, **adverse event monitoring**, **literature mining for drug discovery**, and **biomedical knowledge graph construction**, with **production-ready reliability** for clinical and research applications.
### 🎯 Key Features
- **High Precision**: Optimized for biomedical entity recognition
- **Domain-Specific**: Trained on curated BC2GM dataset
- **Production-Ready**: Validated on clinical benchmarks
- **Easy Integration**: Compatible with Hugging Face Transformers ecosystem
### 🏷️ Supported Entity Types
This model can identify and classify the following biomedical entities:
- `B-GENE/PROTEIN`
- `I-GENE/PROTEIN`
## 📊 Dataset
BC2GM corpus targets gene and protein mention recognition from the BioCreative II Gene Mention task.
The BC2GM (BioCreative II Gene Mention) corpus is a foundational dataset for gene and protein name recognition in biomedical literature, created for the BioCreative II challenge. This corpus contains thousands of sentences from MEDLINE abstracts with manually annotated gene and protein mentions, serving as a critical benchmark for genomics and molecular biology NER systems. The dataset addresses the challenging task of identifying gene names, which often have complex nomenclature and ambiguous boundaries. It has been instrumental in advancing automated gene recognition systems used in functional genomics research, gene expression analysis, and molecular biology text mining. The corpus continues to be widely used for training and evaluating biomedical NER models.
## 📊 Performance Metrics
### Current Model Performance
- **F1 Score**: `0.86`
- **Precision**: `0.85`
- **Recall**: `0.88`
- **Accuracy**: `0.96`
### 🏆 Comparative Performance on BC2GM Dataset
| Rank | Model | F1 Score | Precision | Recall | Accuracy |
|------|-------|----------|-----------|--------|-----------|
| 🥇 1 | [OpenMed-NER-GenomeDetect-SuperClinical-434M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-SuperClinical-434M) | **0.9010** | 0.8954 | 0.9066 | 0.9683 |
| 🥈 2 | [OpenMed-NER-GenomeDetect-PubMed-335M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-PubMed-335M) | **0.8963** | 0.8924 | 0.9002 | 0.9719 |
| 🥉 3 | [OpenMed-NER-GenomeDetect-BioMed-335M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-BioMed-335M) | **0.8943** | 0.8887 | 0.8999 | 0.9704 |
| 4 | [OpenMed-NER-GenomeDetect-MultiMed-335M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-MultiMed-335M) | **0.8905** | 0.8870 | 0.8940 | 0.9631 |
| 5 | [OpenMed-NER-GenomeDetect-PubMed-109M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-PubMed-109M) | **0.8894** | 0.8850 | 0.8937 | 0.9706 |
| 6 | [OpenMed-NER-GenomeDetect-BioPatient-108M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-BioPatient-108M) | **0.8865** | 0.8850 | 0.8881 | 0.9590 |
| 7 | [OpenMed-NER-GenomeDetect-SuperMedical-355M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-SuperMedical-355M) | **0.8852** | 0.8802 | 0.8902 | 0.9668 |
| 8 | [OpenMed-NER-GenomeDetect-BioClinical-108M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-BioClinical-108M) | **0.8851** | 0.8767 | 0.8937 | 0.9582 |
| 9 | [OpenMed-NER-GenomeDetect-MultiMed-568M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-MultiMed-568M) | **0.8834** | 0.8770 | 0.8898 | 0.9671 |
| 10 | [OpenMed-NER-GenomeDetect-PubMed-109M](https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-PubMed-109M) | **0.8833** | 0.8781 | 0.8886 | 0.9706 |
*Rankings based on F1-score performance across all models trained on this dataset.*

*Figure: OpenMed (Open-Source) vs. Latest SOTA (Closed-Source) performance comparison across biomedical NER datasets.*
## 🚀 Quick Start
### Installation
```bash
pip install transformers torch
```
### Usage
```python
from transformers import pipeline
# Load the model and tokenizer
# Model: https://huggingface.co/OpenMed/OpenMed-NER-GenomeDetect-BigMed-278M
model_name = "OpenMed/OpenMed-NER-GenomeDetect-BigMed-278M"
# Create a pipeline
medical_ner_pipeline = pipeline(
model=model_name,
aggregation_strategy="simple"
)
# Example usage
text = "The EGFR gene mutation was identified in lung cancer patients."
entities = medical_ner_pipeline(text)
print(entities)
token = entities[0]
print(text[token["start"] : token["end"]])
```
NOTE: The `aggregation_strategy` parameter defines how token predictions are grouped into entities. For a detailed explanation, please refer to the [Hugging Face documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TokenClassificationPipeline.aggregation_strategy).
Here is a summary of the available strategies (a comparison sketch follows the list):
- **`none`**: Returns raw token predictions without any aggregation.
- **`simple`**: Groups adjacent tokens with the same entity type (e.g., `B-LOC` followed by `I-LOC`).
- **`first`**: For word-based models, if tokens within a word have different entity tags, the tag of the first token is assigned to the entire word.
- **`average`**: For word-based models, this strategy averages the scores of tokens within a word and applies the label with the highest resulting score.
- **`max`**: For word-based models, the entity label from the token with the highest score within a word is assigned to the entire word.
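To see how these strategies differ in practice, the following minimal sketch runs the same sentence under several of them (exact spans and scores depend on the model and tokenizer versions):
```python
from transformers import pipeline

model_name = "OpenMed/OpenMed-NER-GenomeDetect-BigMed-278M"
text = "Overexpression of HER2 protein correlates with poor prognosis."

for strategy in ["none", "simple", "max"]:
    # Rebuilt per strategy for clarity; in practice, construct the pipeline
    # once with the strategy you need.
    ner = pipeline(model=model_name, aggregation_strategy=strategy)
    print(f"--- aggregation_strategy={strategy} ---")
    print(ner(text))
```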
### Batch Processing
For efficient processing of large datasets, use proper batching with the `batch_size` parameter:
```python
texts = [
"The EGFR gene mutation was identified in lung cancer patients.",
"Overexpression of HER2 protein correlates with poor prognosis.",
"The TP53 gene encodes a tumor suppressor protein.",
"The BRAF V600E mutation is a common driver in melanoma.",
"Insulin receptor signaling is essential for glucose homeostasis.",
]
# Efficient batch processing with optimized batch size
# Adjust batch_size based on your GPU memory (typically 8, 16, 32, or 64)
results = medical_ner_pipeline(texts, batch_size=8)
for i, entities in enumerate(results):
print(f"Text {i+1} entities:")
for entity in entities:
print(f" - {entity['word']} ({entity['entity_group']}): {entity['score']:.4f}")
```
### Large Dataset Processing
For processing large datasets efficiently:
```python
from transformers.pipelines.pt_utils import KeyDataset
from datasets import Dataset
import pandas as pd
# Load your data
# Load a medical dataset from Hugging Face
from datasets import load_dataset
# Load a public medical dataset (using a subset for testing)
medical_dataset = load_dataset("BI55/MedText", split="train[:100]") # Load first 100 examples
data = pd.DataFrame({"text": medical_dataset["Completion"]})
dataset = Dataset.from_pandas(data)
# Process with optimal batching for your hardware
batch_size = 16 # Tune this based on your GPU memory
results = []
for out in medical_ner_pipeline(KeyDataset(dataset, "text"), batch_size=batch_size):
results.extend(out)
print(f"Processed {len(results)} texts with batching")
```
### Performance Optimization
**Batch Size Guidelines:**
- **CPU**: Start with batch_size=1-4
- **Single GPU**: Try batch_size=8-32 depending on GPU memory
- **High-end GPU**: Can handle batch_size=64 or higher
- **Monitor GPU utilization** to find the optimal batch size for your hardware
**Memory Considerations:**
```python
# For limited GPU memory, use smaller batches
medical_ner_pipeline = pipeline(
model=model_name,
aggregation_strategy="simple",
device=0 # Specify GPU device
)
# Process with memory-efficient batching
batch_size = 8  # tune for your hardware
results = []
for batch_start in range(0, len(texts), batch_size):
    batch = texts[batch_start:batch_start + batch_size]
    batch_results = medical_ner_pipeline(batch, batch_size=len(batch))
    results.extend(batch_results)
```
## 📚 Dataset Information
- **Dataset**: BC2GM
- **Description**: Gene/Protein Entity Recognition - Gene and protein mentions
### Training Details
- **Base Model**: xlm-roberta-base
- **Training Framework**: Hugging Face Transformers
- **Optimization**: AdamW optimizer with learning rate scheduling
- **Validation**: Evaluated on a held-out test set
## 🔬 Model Architecture
- **Base Architecture**: xlm-roberta-base
- **Task**: Token Classification (Named Entity Recognition)
- **Labels**: Dataset-specific entity types
- **Input**: Tokenized biomedical text
- **Output**: BIO-tagged entity predictions
## 💡 Use Cases
This model is particularly useful for:
- **Clinical Text Mining**: Extracting entities from medical records
- **Biomedical Research**: Processing scientific literature
- **Drug Discovery**: Identifying gene and protein targets in the literature
- **Healthcare Analytics**: Analyzing patient data and outcomes
- **Academic Research**: Supporting biomedical NLP research
## 📜 License
Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.
## 🤝 Contributing
We welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join our mission to advance open-source Healthcare AI, we'd love to hear from you.
Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on our latest releases and developments.
## Citation
If you use this model in your research or applications, please cite the following paper:
```bibtex
@misc{panahi2025openmedneropensourcedomainadapted,
title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
author={Maziyar Panahi},
year={2025},
eprint={2508.01630},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.01630},
}
```
Proper citation helps support and acknowledge my work. Thank you!
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_fleecy_fox
|
chinna6
| 2025-06-25T12:22:18Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rabid fleecy fox",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-19T10:16:13Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_fleecy_fox
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rabid fleecy fox
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_fleecy_fox
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_fleecy_fox", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel
|
chinna6
| 2025-06-25T12:17:26Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am feathered agile camel",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-20T11:05:04Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am feathered agile camel
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_agile_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jupranec/backto2017
|
jupranec
| 2025-06-25T12:14:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T09:38:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
worldbench/lidargen
|
worldbench
| 2025-06-25T12:07:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T00:53:31Z |
---
license: apache-2.0
---
|
webis/tiny-bert-ranker
|
webis
| 2025-06-25T12:04:04Z | 15 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"text-ranking",
"en",
"license:mit",
"region:us"
] |
text-ranking
| 2024-06-28T20:27:45Z |
---
language:
- en
license: mit
library_name: sentence-transformers
pipeline_tag: text-ranking
---
# tiny-bert-ranker model card
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://web.archive.org/web/20240315094214/https://huggingface.co/prajjwal1/bert-tiny)
as part of our submission to [ReNeuIR 2024](https://web.archive.org/web/20240704171521/https://reneuir.org/shared_task.html).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is based on the pre-trained [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny). It is fine-tuned on a 1GB subset of data
extracted from MS MARCO's [Train Triples Small](https://web.archive.org/web/20231209043304/https://microsoft.github.io/msmarco/Datasets.html).
Tiny-bert-ranker is part of our investigation into the tradeoffs between efficiency and effectiveness in ranking models.
This approach does not involve BM25 score injection or distillation.
- **Developed by:** Team FSU at ReNeuIR 2024
- **Model type:** BERT-based cross-encoder for text ranking
- **License:** mit
- **Finetuned from model:** prajjwal1/bert-tiny
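The card does not include a usage snippet. Below is a hypothetical sketch, assuming the checkpoint loads as a sentence-transformers `CrossEncoder` (the repository is tagged `sentence-transformers` with the `text-ranking` pipeline); verify against the repository files before relying on it:
```python
from sentence_transformers import CrossEncoder

# Hypothetical usage: score (query, document) pairs for relevance
model = CrossEncoder("webis/tiny-bert-ranker")
scores = model.predict([
    ("what is a cross-encoder", "A cross-encoder scores a query and a document jointly."),
    ("what is a cross-encoder", "Bananas are rich in potassium."),
])
print(scores)  # a higher score indicates a more relevant pair
```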
|
eraydikyologlu/bert_ayt_fizik
|
eraydikyologlu
| 2025-06-25T11:58:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-25T11:40:40Z |
---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_keras_callback
model-index:
- name: eraydikyologlu/bert_ayt_fizik
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eraydikyologlu/bert_ayt_fizik
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2037
- Train Accuracy: 0.9634
- Validation Loss: 0.1170
- Validation Accuracy: 0.9784
- Epoch: 18
## Model description
More information needed
## Intended uses & limitations
More information needed
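Pending further documentation, here is a minimal inference sketch (the Turkish example question is hypothetical, and we assume the checkpoint's `id2label` mapping is populated; otherwise the generic `LABEL_n` names are printed):
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf

model_name = "eraydikyologlu/bert_ayt_fizik"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Classify an AYT-style physics question
inputs = tokenizer(
    "Sürtünmesiz yatay düzlemde cisme etki eden net kuvvet sıfırsa cisim nasıl hareket eder?",
    return_tensors="tf",
)
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```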
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4770, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 530, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 4.5820 | 0.0303 | 4.3223 | 0.0817 | 0 |
| 3.4110 | 0.2978 | 2.3701 | 0.4760 | 1 |
| 2.0594 | 0.5300 | 1.5347 | 0.5938 | 2 |
| 1.4984 | 0.6083 | 1.1782 | 0.6526 | 3 |
| 1.2008 | 0.6594 | 0.9504 | 0.7043 | 4 |
| 1.0088 | 0.7080 | 0.7924 | 0.7536 | 5 |
| 0.8641 | 0.7486 | 0.6628 | 0.8089 | 6 |
| 0.7482 | 0.7838 | 0.5492 | 0.8522 | 7 |
| 0.6515 | 0.8144 | 0.4472 | 0.8786 | 8 |
| 0.5631 | 0.8435 | 0.3810 | 0.8966 | 9 |
| 0.4869 | 0.8695 | 0.3191 | 0.9062 | 10 |
| 0.4241 | 0.8928 | 0.2604 | 0.9291 | 11 |
| 0.3696 | 0.9075 | 0.2225 | 0.9519 | 12 |
| 0.3252 | 0.9258 | 0.1905 | 0.9591 | 13 |
| 0.2845 | 0.9367 | 0.1612 | 0.9736 | 14 |
| 0.2607 | 0.9423 | 0.1430 | 0.9820 | 15 |
| 0.2336 | 0.9545 | 0.1307 | 0.9772 | 16 |
| 0.2150 | 0.9586 | 0.1225 | 0.9748 | 17 |
| 0.2037 | 0.9634 | 0.1170 | 0.9784 | 18 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 2.14.4
- Tokenizers 0.21.1
|
DreamGallery/task-10-microsoft-Phi-4-mini-instruct
|
DreamGallery
| 2025-06-25T11:44:50Z | 646 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
] | null | 2025-05-30T01:40:25Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
Danucore/Qwen3-32B-FP4
|
Danucore
| 2025-06-25T11:42:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-06-25T11:36:33Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen3-32B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-32B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 32.8B
- Number of Parameters (Non-Embedding): 31.2B
- Number of Layers: 64
- Number of Attention Heads (GQA): 64 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model to generate content based on the given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-32B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-32B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-32B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
    #     # Add: when the response content is `<think>this is the thought</think>this is the answer`;
    #     # Do not add: when the response has been separated into reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (see the sketch after this list). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
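As a minimal sketch, the thinking-mode sampling settings above translate into a standard `generate` call (reusing `model` and `model_inputs` from the Quickstart; `min_p` requires a reasonably recent `transformers`):
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # avoid greedy decoding, per the guidance above
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```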
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
MinaMila/llama_instbase_LoRa_Adult_ep5_33
|
MinaMila
| 2025-06-25T11:32:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T11:32:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
full-video-thtiencute-lo-clip-truoc-guong/xem.full.video.thtiencute.lo.clip.truoc.guong.link.hd
|
full-video-thtiencute-lo-clip-truoc-guong
| 2025-06-25T11:30:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T11:30:47Z |
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 Video](https://tinyurl.com/lamavideos?guamara)
[🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐 Video](https://tinyurl.com/modasnap?fkisreal)
<a href="https://tinyurl.com/lamavideos?guamara" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
stablediffusionapi/cyberrealisticxl-v58
|
stablediffusionapi
| 2025-06-25T11:27:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-25T11:16:40Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a23a9902-e182-4c2c-8a44-5c9133a87a81/width=1642/82736578.jpeg
---
# CyberRealistic XL - v5.8 API Inference
<Gallery />
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "cyberrealisticxl-v58".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/cyberrealisticxl-v58)
Model link: [View model](https://modelslab.com/models/cyberrealisticxl-v58)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "cyberrealisticxl-v58",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "",
"lora": "",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Bearrr310/train_grpo_1.5B_unsloth_0625
|
Bearrr310
| 2025-06-25T11:19:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"dataset:unsloth-1.5B-reward-0625",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T11:19:26Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
datasets: unsloth-1.5B-reward-0625
library_name: transformers
model_name: train_grpo_1.5B_unsloth_0625
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for train_grpo_1.5B_unsloth_0625
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit) on the [unsloth-1.5B-reward-0625](https://huggingface.co/datasets/unsloth-1.5B-reward-0625) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/train_grpo_1.5B_unsloth_0625", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
JayHyeon/Qwen_0.5-cDPO_5e-7_1.0vpo_constant-1ep_0.3flip
|
JayHyeon
| 2025-06-25T11:18:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T10:16:43Z |
---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-cDPO_5e-7_1.0vpo_constant-1ep_0.3flip
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-cDPO_5e-7_1.0vpo_constant-1ep_0.3flip
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-cDPO_5e-7_1.0vpo_constant-1ep_0.3flip", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/weul47kh)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
diffusion-reasoning/wll_SFT_NP_gsm8k-1000
|
diffusion-reasoning
| 2025-06-25T11:09:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:GSAI-ML/LLaDA-8B-Instruct",
"base_model:adapter:GSAI-ML/LLaDA-8B-Instruct",
"region:us"
] | null | 2025-06-25T11:09:45Z |
---
base_model: GSAI-ML/LLaDA-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
yanglings/Full.Video.For.18.matt.kervi.javier.isaac.video.mattkervi.javier.isaac.twitter
|
yanglings
| 2025-06-25T11:02:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T10:59:05Z |
<a href="https://xytona.cfd/matt-kervi-javier-isaac">🌐 Click Here To link (matt-kervi-javier-isaac)</a>
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://xytona.cfd/matt-kervi-javier-isaac">🌐 matt-kervi-javier-isaac</a>
|
BATMAN12/kai_ben10_all_variants_V1
|
BATMAN12
| 2025-06-25T11:01:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:adapter:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:mit",
"region:us"
] |
text-to-image
| 2025-06-25T11:00:28Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/photo-collage.png (4).png
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
instance_prompt: null
license: mit
---
# Kai Green | Ben 10 | All Variants and Outfits | Illustrious
<Gallery />
## Model description
### Ben 10: Original Series (2005)

**kai (Outfit 1)**
- Trigger: `kaiOS, kaiOS outfit1`
- Prompt: `kaiOS, kaiOS outfit1, 1girl, solo, full body, dark-skinned female, brown eyes, long hair, black hair, polo shirt, collared shirt, bracelet, ring, green shorts, white socks, brown footwear,`
- Recommended LoRA weight: 1.0

**kai (Outfit 2)**
- Trigger: `kaiOS, kaiOS outfit2`
- Prompt: `kaiOS, kaiOS outfit2, 1girl, solo, full body, dark-skinned female, brown eyes, long hair, ponytail, headband, black hair, dress, flag print, bracelet, ring, loose socks, shoes,`
- Recommended LoRA weight: 1.0

### Ben 10: Omniverse

**kai (Outfit 1)**
- Trigger: `kaiOV, kaiOV outfit1`
- Prompt: `kaiOV, kaiOV outfit1, 1girl, solo, full body, dark-skinned female, blank eyes, brown eyes, long hair, ponytail, black hair, red lips, earrings, turtleneck, red shirt, breasts, gloves, belt, brown shorts, thigh strap, boots,`
- Recommended LoRA weight: 1.0

**kai (Outfit 2)**
- Trigger: `kaiOV, kaiOV outfit2`
- Prompt: `kaiOV, kaiOV outfit2, 1girl, solo, full body, dark-skinned female, blank eyes, brown eyes, long hair, ponytail, black hair, earrings, red lips, breasts, red jacket, gloves, belt, brown shorts, thigh strap, boots,`
- Recommended LoRA weight: 1.0

**Future Kai**
- Trigger: `futureKaiOV`
- Prompt: `futureKaiOV, 1girl, solo, full body, dark-skinned female, blank eyes, brown eyes, long hair, black hair, red lips, feathers, brown bodysuit, medium breasts, sword behind back, sheathed, red sleeves, fingerless gloves, boots,`
- Recommended LoRA weight: 1.0

PS: ADetailer/face detailer is recommended. A minimal loading sketch follows below.
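To apply one of these recipes, a minimal `diffusers` sketch (assuming the base model is published in diffusers format; the repo ids and LoRA scale follow the card, everything else is illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-xl-early-release-v0",  # assumed diffusers-format base
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("BATMAN12/kai_ben10_all_variants_V1")

image = pipe(
    "kaiOS, kaiOS outfit1, 1girl, solo, full body, dark-skinned female, "
    "brown eyes, long hair, black hair, polo shirt, collared shirt",
    cross_attention_kwargs={"scale": 1.0},  # recommended LoRA weight from the card
).images[0]
image.save("kai_outfit1.png")
```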
## Download model
Weights for this model are available in Safetensors format.
[Download](/BATMAN12/kai_ben10_all_variants_V1/tree/main) them in the Files & versions tab.
|
ZINTI-PALMARES-VIDEO-FILTRADO-PORTERA/FULL.18.VIDEO.ZINTI.PALMARES.VIDEO.FILTRADO.PORTERA.DE.JUAREZ
|
ZINTI-PALMARES-VIDEO-FILTRADO-PORTERA
| 2025-06-25T10:57:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T10:56:54Z |
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 Video](https://tinyurl.com/lamavideos?guamara)
[🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐 Video](https://tinyurl.com/modasnap?fkisreal)
<a href="https://tinyurl.com/lamavideos?guamara" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
sapna-shah-pakcricketinfo-video-link/watch.sapna.shah.pakcricketinfo.viral.video.link
|
sapna-shah-pakcricketinfo-video-link
| 2025-06-25T10:35:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T10:35:04Z |
<a href="https://t.co/tRvC6b2viz"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
Yojirex/gemma-3-27b-zoomer-float16
|
Yojirex
| 2025-06-25T10:27:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:mlabonne/gemma-3-27b-it-abliterated",
"base_model:finetune:mlabonne/gemma-3-27b-it-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-25T10:03:41Z |
---
base_model: mlabonne/gemma-3-27b-it-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Yojirex
- **License:** apache-2.0
- **Finetuned from model:** mlabonne/gemma-3-27b-it-abliterated
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vishakr01/comp4_18
|
vishakr01
| 2025-06-25T10:23:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T10:21:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhxle/truesight-ft-job-2f721a87-cbda-4484-bb28-4c4594d3b64b
|
minhxle
| 2025-06-25T10:09:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T10:09:48Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vishakr01/comp4_17
|
vishakr01
| 2025-06-25T10:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T10:07:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cem13/lora_model1_48_00992_A32_s14_hata
|
Cem13
| 2025-06-25T10:08:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T10:08:06Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cem13
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
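A minimal loading sketch, assuming the standard Unsloth API (sequence length and quantization settings below are illustrative, not from the card):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Cem13/lora_model1_48_00992_A32_s14_hata",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```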
|
rayonlabs/DeepSeek-R1-Distill-Llama-70B-verifiable-math-problems-a5f354d9-2dc5-4e41-9ea3-36d9a7832366
|
rayonlabs
| 2025-06-25T10:07:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"region:us"
] | null | 2025-06-25T10:07:51Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Kort/einzwei_3
|
Kort
| 2025-06-25T10:03:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T09:34:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-nulookindia-com-viral-video/FULL.VIDEO.nulookindia.com.Viral.Video.Tutorial.Official
|
New-videos-nulookindia-com-viral-video
| 2025-06-25T09:52:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T09:52:13Z |
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 Video](https://tinyurl.com/lamavideos?guamara)
[🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐 Video](https://tinyurl.com/modasnap?fkisreal)
<a href="https://tinyurl.com/lamavideos?guamara" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
onelevelstudio/diffusion
|
onelevelstudio
| 2025-06-25T09:35:41Z | 5 | 0 | null |
[
"region:us"
] | null | 2025-04-01T01:18:16Z |
---
{}
---
# Diffusion Models ([README](https://huggingface.co/onelevelstudio/diffusion/blob/main/README.md))
| | Model - Checkpoint | Download | Source | Original Name | Date | Base | Precision | Size | CFG | Steps |
|----|------------------------------|--------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------|----------------------------------------|----------|-----------------------|-------------|---------|-----------|---------|
| 🌐 | **WAI_V14.0** | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/827184/WAI_V14.0.safetensors) | [1761560](https://civitai.com/models/827184?modelVersionId=1761560) | waiNSFWIllustrious_v140 | 2025 May | SDXL1.0 (Illustrious) | fp16 pruned | 6.94 GB | 5.0 - 7.0 | 15 - 30 |
| 🌐 | **LVSTIFY_V5.0** | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/573152/LVSTIFY_V5.0.safetensors) | [1094291](https://civitai.com/models/573152?modelVersionId=1094291) | lvstifySDXLNSFW_endgame | 2024 Nov | SDXL1.0 | fp16 pruned | 6.94 GB | 2.5 - 4.5 | 25 - 35 |
| 🌐 | LVSTIFY_V6.0 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/573152/LVSTIFY_V6.0.safetensors) | [1569593](https://civitai.com/models/573152?modelVersionId=1569593) | lvstifySDXLNSFW_oltFIXEDTEXTURES | 2025 Mar | SDXL1.0 | fp16 pruned | 6.94 GB | 2.5 - 4.5 | 25 - 35 |
| 🌐 | PonyRealism_V2.2 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/372465/PonyRealism_V2.2.safetensors) | [0914390](https://civitai.com/models/372465?modelVersionId=914390) | ponyRealism_V22MainVAE | 2024 Oct | SDXL1.0 (Pony) | fp16 full | 7.11 GB | 6.0 - 7.0 | 30 - 40 |
| 🌐 | Juggernaut_V11.0 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/133005/Juggernaut_V11.0.safetensors) | [0782002](https://civitai.com/models/133005?modelVersionId=782002) | juggernautXL_juggXIByRundiffusion | 2024 Aug | SDXL1.0 | fp16 full | 7.11 GB | 3.0 - 6.0 | 30 - 40 |
| 🌐 | RealisticVision_V6.0 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/004201/RealisticVision_V6.0.safetensors) | [0245598](https://civitai.com/models/4201?modelVersionId=245598) | realisticVisionV60B1_v60B1VAE | 2023 Dec | SD1.5 | fp16 pruned | 2.13 GB | 3.5 - 7.0 | 25 - 35 |
| 🌐 | RealisticVision_V5.1 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/004201/RealisticVision_V5.1.safetensors) | [0130072](https://civitai.com/models/4201?modelVersionId=130072) | realisticVisionV60B1_v51VAE | 2023 Jul | SD1.5 | fp16 pruned | 2.13 GB | 3.5 - 7.0 | 25 - 35 |
| 🖌️ | LVSTIFY_V6.0_INPAINT | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/573152/LVSTIFY_V6.0_INPAINT.safetensors) | [1588039](https://civitai.com/models/573152?modelVersionId=1588039) | lvstifySDXLNSFW_oltINPAINTING | 2025 Mar | SDXL1.0 (Inpaint) | fp16 pruned | 6.94 GB | 2.5 - 4.5 | 25 - 35 |
| 🖌️ | RealisticVision_V5.1_INPAINT | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/004201/RealisticVision_V5.1_INPAINT.safetensors) | [0130090](https://civitai.com/models/4201?modelVersionId=130090) | realisticVisionV60B1_v51VAE-inpainting | 2023 Jul | SD1.5 (Inpaint) | fp16 pruned | 2.13 GB | 3.5 - 7.0 | 25 - 35 |
| ⚡ | LVSTIFY_V5.0_DMD2 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/573152/LVSTIFY_V5.0_DMD2.safetensors) | [1099200](https://civitai.com/models/573152?modelVersionId=1099200) | lvstifySDXLNSFW_endgameDMD2 | 2024 Nov | SDXL1.0 (DMD2) | fp16 pruned | 6.94 GB | 1.0 - 1.3 | 04 - 08 |
| ⚡ | RealisticVision_V5.1_HYPER | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/004201/RealisticVision_V5.1_HYPER.safetensors) | [0501240](https://civitai.com/models/4201?modelVersionId=501240) | realisticVisionV60B1_v51HyperVAE | 2024 May | SD1.5 (Hyper) | fp16 pruned | 2.13 GB | 1.5 - 2.0 | 04 - 06 |
| | Model - LoRA | Download | Dataset | Date | Base | Dim/Alpha | Size | Trigger Words |
|----|------------------------------|--------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|----------|-------------------|-----------|---------|------------------|
| 🧩 | LORA_GHXST_V4 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V4.safetensors) | [Dataset](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V4.zip) | 2025 May | SDXL1.0 LVSTIFY | 16 / 8 | 0.12 GB | `ghxst mask`, `ghxst helmet` / `ghxst balaclava`|
| 🧩 | LORA_GHXST_V3 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V3.safetensors) | [Dataset](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V3.zip) | 2025 Apr | SDXL1.0 LVSTIFY | 16 / 8 | 0.12 GB | `ghxst mask`, `ghxst helmet` / `ghxst balaclava`|
| 🧩 | LORA_GHXST_V2 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V2.safetensors) | [Dataset](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V2.zip) | 2025 Apr | SDXL1.0 LVSTIFY | 16 / 8 | 0.12 GB | `ghxst mask`, `ghxst helmet` / `ghxst balaclava`|
| 🧩 | LORA_GHXST_V1 | [Download](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V1.safetensors) | [Dataset](https://huggingface.co/onelevelstudio/diffusion/resolve/main/000000/LORA_GHXST_V1.zip) | 2025 Apr | SDXL1.0 LVSTIFY | 16 / 8 | 0.12 GB | `ghxst mask`, `ghxst helmet` / `ghxst balaclava`|
Model Types:
- 🌐 Base Model
- ⚡ Lightning/Hyper/DMD2 Model
- 🖌️ Inpainting Model
- 🧩 LoRA Model
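A minimal sketch of loading one of the single-file SDXL checkpoints above with `diffusers` (the URL is the WAI_V14.0 row; CFG and step count follow that row's recommended ranges and are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

url = "https://huggingface.co/onelevelstudio/diffusion/resolve/main/827184/WAI_V14.0.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(url, torch_dtype=torch.float16).to("cuda")

# CFG 5.0-7.0 and 15-30 steps per the table above
image = pipe("1girl, solo, masterpiece", guidance_scale=6.0, num_inference_steps=25).images[0]
image.save("sample.png")
```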
|
LarryAIDraw/Ubel_Pony
|
LarryAIDraw
| 2025-06-25T09:23:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-25T08:58:51Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/1021917/ubel-sousou-no-frieren-ponyillustriousnoobai?modelVersionId=1148968
|
BounharAbdelaziz/Qwen2.5-0.5B-DPO-French-Orca
|
BounharAbdelaziz
| 2025-06-25T09:23:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"fr",
"en",
"ar",
"dataset:AIffl/french_orca_dpo_pairs",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T08:59:03Z |
---
library_name: transformers
datasets:
- AIffl/french_orca_dpo_pairs
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
language:
- fr
- en
- ar
pipeline_tag: text-generation
---
# Qwen 2.5-0.5B-Instruct – French DPO
A lightweight (≈ 494 M parameters) Qwen 2.5 model fine-tuned with Direct Preference Optimization (DPO) on the [AIffl/french_orca_dpo_pairs](https://huggingface.co/datasets/AIffl/french_orca_dpo_pairs) dataset. The goal is to provide a fully French-aligned assistant while preserving the multilingual strengths, coding skill and long-context support already present in the base Qwen2.5-0.5B-Instruct model.
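To see the preference-pair structure the DPO stage consumes, a quick inspection sketch (field names vary by dataset; check the dataset card rather than relying on this):
```python
from datasets import load_dataset

ds = load_dataset("AIffl/french_orca_dpo_pairs", split="train")
print(ds.column_names)  # typically prompt/question, chosen, rejected fields
print(ds[0])
```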
# Try it
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "BounharAbdelaziz/Qwen2.5-0.5B-DPO-French-Orca"
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id,
torch_dtype="auto",
device_map="auto")
messages = [
{"role": "system", "content": "Vous êtes un assistant francophone serviable."},
{"role": "user", "content": "Explique la différence entre fusion et fission nucléaires en 3 phrases."}
]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output_ids = model.generate(**tok(text, return_tensors="pt").to(model.device),
max_new_tokens=256)
print(tok.decode(output_ids[0], skip_special_tokens=True))
```
# Intended use & limitations
- **Intended:** French conversational agent, tutoring, summarisation, coding help in constrained contexts.
- **Not intended:** Unfiltered medical, legal or financial advice; high-stakes decision making.
Although DPO reduces harmful completions, the model can still produce errors, hallucinations or biased outputs inherited from the base model and data. Always verify critical facts.
|
imrahulwarkade/tinyllama-toneopbot-lora-2k
|
imrahulwarkade
| 2025-06-25T09:06:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-24T08:24:44Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YonaKhine/finetuned-w2v2-bert-burmese-asr_male_1hr
|
YonaKhine
| 2025-06-25T09:02:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-24T14:19:36Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-w2v2-bert-burmese-asr_male_1hr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-w2v2-bert-burmese-asr_male_1hr
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
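As a minimal usage sketch (the audio path is a placeholder and 16 kHz mono input is assumed; not part of the original card):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="YonaKhine/finetuned-w2v2-bert-burmese-asr_male_1hr",
)
print(asr("sample_burmese_16khz.wav")["text"])
```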
|
Kiali/kiali-chatbot
|
Kiali
| 2025-06-25T09:00:41Z | 67 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"base_model:adapter:google/gemma-2b-it",
"license:gemma",
"region:us"
] | null | 2025-06-09T08:14:43Z |
---
license: gemma
base_model: google/gemma-2b-it
tags:
- trl
- sft
- generated_from_trainer
library_name: peft
model-index:
- name: kiali-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiali-chatbot
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.41.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
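A minimal inference sketch that attaches this LoRA adapter to its gemma-2b-it base (the prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "Kiali/kiali-chatbot")  # load the adapter
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")

inputs = tokenizer("What does Kiali do?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```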
|
4lir324/ssr
|
4lir324
| 2025-06-25T08:53:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T08:52:26Z |
---
license: apache-2.0
---
|
ai-sage/GigaChat-20B-A3B-base
|
ai-sage
| 2025-06-25T08:51:30Z | 60 | 11 |
transformers
|
[
"transformers",
"safetensors",
"deepseek",
"text-generation",
"custom_code",
"ru",
"en",
"arxiv:2506.09440",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-12-13T16:40:54Z |
---
language:
- ru
- en
license: mit
library_name: transformers
pipeline_tag: text-generation
---
# GigaChat-20B-A3B-base
Model presented in: [GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture](https://huggingface.co/papers/2506.09440)
A large language model based on a Mixture-of-Experts (MoE) architecture, trained **from scratch** specifically for Russian.
The model has 20 billion parameters in total, but only 3 billion are active during inference. Context length: 131k tokens.
More details in the [Habr article](https://habr.com/en/companies/sberdevices/articles/865996/).
Update: the weights have been re-uploaded in `.safetensors` format.
## Model architecture
GigaChat-20B-A3B is built from the following components:
- Fine-grained Experts + Shared Experts
- Grouped Query Attention
- Rotary Position Embeddings
- RMSNorm
- SwiGLU in the MLP
Importantly, in this MoE implementation some experts are activated depending on the context, while others (the shared experts) are always used.
## Benchmarks
General English-language benchmarks. Scores were measured with the popular open-source [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).
| Bench | T-lite-0.1<br>(llama 3.0 8B based)| Llama-3.1-8B | GigaChat-20B-A3B-base | Gemma-9B |
| ----------------------------- | ---------- | ------------ | --------------------- | --------- |
| MMLU (5-shot) | 62.56 | 65.21 | 63.02 | 70.6 |
| MMLU-pro (5-shot) | 32.19 | 35.7 | 31.41 | 42.85 |
| MMLU-ru (5-shot) | 55.51 | 54.1 | 58.38 | 62.57 |
| BBH (3-shot) | 62.36 | 62.79 | 53.54 | 70.48 |
| ARC-C (25-shot) | 58.19 | 54.69 | 61.69 | 68.34 |
| TruthfulQA (0-shot) (rougeL) | 46.51 | 34.52 | 31.82 | 41.49 |
| Winogrande (5-shot) | 78.45 | 77.43 | 75.85 | 79.4 |
| Hellaswag (10-shot) | 82.21 | 81.85 | 81.91 | 82.5 |
| GPQA (5-shot) | 0.25 | 23.44 | 25.22 | 30.36 |
| MATH (4-shot) | 12.9 | 14.04 | 15.04 | 20.06 |
| GSM8K (4-shot) (strict-match) | 67.93 | 51.4 | 59.06 | 68.99 |
| HumanEval | 16.46 | 25.61 | 32.32 | 37.2 |
| **AVG** | **47.96** | **48.4** | **49.11** | **56.24** |
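A hedged reproduction sketch using the harness's Python API (task names and few-shot settings are illustrative, not the exact evaluation configuration used above):
```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ai-sage/GigaChat-20B-A3B-base,trust_remote_code=True",
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"])
```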
## Requirements
* ```transformers>=4.47```
## Usage example with transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "ai-sage/GigaChat-20B-A3B-base"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
messages = (
"Ниже я написал подробное доказательство теоремы о неподвижной точке:"
)
input_tensor = tokenizer(messages, return_tensors="pt").input_ids
outputs = model.generate(input_tensor.to(model.device))
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=False)
print(result)
```
## Usage example with vLLM
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "ai-sage/GigaChat-20B-A3B-base"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)  # needed for eos_token_id below
llm = LLM(model=model_name, tokenizer=model_name, trust_remote_code=True)
sampling_params = SamplingParams(
    temperature=0.3,
    max_tokens=8192,
    stop_token_ids=[tokenizer.eos_token_id]
)
messages = (
"Ниже я написал подробное доказательство теоремы о неподвижной точке:"
)
outputs = llm.generate(messages, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
## Generation speed
| Model | Total params (B) | Active params (B) | Req/s | Output Token/s | Total Token/s |
|---------|-----------------|------------------|--------|----------------|----------------|
| Qwen/Qwen1.5-MoE-A2.7B-Chat | 14 | 2.7 | 0.62 | 156.43 | 291.17 |
| deepseek-ai/deepseek-moe-16b-chat | 16 | 2.8 | 0.59 | 149.53 | 285.39 |
| **GigaChat-20B-A3B** | 20 | 3.3 | 0.55 | 137.43 | 259.27 |
| Qwen/Qwen2.5-3B-Instruct | 3 | 3 | 0.54 | 135.10 | 251.44 |
| meta-llama/Meta-Llama-3-8B-Instruct | 8 | 8 | 0.35 | 83.26 | 157.32 |
| google/gemma-2-9b-it | 9 | 9 | 0.27 | 54.87 | 113.69 |
|
hirundo-io/gemma-3-4b-it-debiased
|
hirundo-io
| 2025-06-25T08:48:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-25T06:35:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlfoundations-dev/openthoughts3_100k_qwen25_1b_bsz1024_lr16e5_epochs5
|
mlfoundations-dev
| 2025-06-25T08:41:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T13:26:31Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts3_100k_qwen25_1b_bsz1024_lr16e5_epochs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts3_100k_qwen25_1b_bsz1024_lr16e5_epochs5
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the mlfoundations-dev/openthoughts3_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00016
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Official-pakcricketinfo-sapna-shah-Viral/LAtest.FULL.VIDEO.pakcricketinfo.sapna.shah.Viral.Video.Tutorial.Official
|
Official-pakcricketinfo-sapna-shah-Viral
| 2025-06-25T06:13:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T06:13:07Z |
<a data-target="animated-image.originalLink" rel="nofollow" href="https://t.co/zOwUAmChGv"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" alt="WATCH Videos" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
|
morning831/llama3.2_3B_news_qlora
|
morning831
| 2025-06-25T06:08:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T06:08:32Z |
---
license: apache-2.0
---
|
sapna-shah-Viral-Leaks-video/VIral.sapna.shah.Viral.Video.Original.Link.4k
|
sapna-shah-Viral-Leaks-video
| 2025-06-25T06:03:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T06:02:58Z |
[](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html)
|
New-pakcricketinfo-sapna-shah-video/pakcricketinfo.sapna.shah.Viral.Video.Tutorial.Official
|
New-pakcricketinfo-sapna-shah-video
| 2025-06-25T05:59:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:59:03Z |
[](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html)
|
yale-nlp/MDCure-FlanT5-Base
|
yale-nlp
| 2025-06-25T05:50:00Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"multi-document",
"long-context",
"Long Context",
"summarization",
"en",
"dataset:yale-nlp/MDCure-72k",
"arxiv:2410.23463",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-11-01T07:00:01Z |
---
base_model:
- google/flan-t5-base
datasets:
- yale-nlp/MDCure-72k
language:
- en
license: apache-2.0
tags:
- multi-document
- long-context
- Long Context
pipeline_tag: summarization
library_name: transformers
---
# MDCure-FlanT5-Base
[📄 Paper](https://arxiv.org/pdf/2410.23463) | [🤗 HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395) | [⚙️ GitHub Repo](https://github.com/yale-nlp/MDCure)
## Introduction
**MDCure** is an effective and scalable procedure for generating high-quality multi-document (MD) instruction tuning data to improve MD capabilities of LLMs. Using MDCure, we construct a suite of MD instruction datasets complementary to collections such as [FLAN](https://github.com/google-research/FLAN) and fine-tune a variety of already instruction-tuned LLMs from the FlanT5, Qwen2, and LLAMA3.1 model families, up to 70B parameters in size. We additionally introduce **MDCureRM**, an evaluator model specifically designed for the MD setting to filter and select high-quality MD instruction data in a cost-effective, RM-as-a-judge fashion. Extensive evaluations on a wide range of MD and long-context benchmarks spanning various tasks show MDCure consistently improves performance over pre-trained baselines and over corresponding base models by up to 75.5%.
We release MDCure datasets of size 12k, 36k, and 72k. We also release MDCureRM and the best MDCure'd model for each architecture/size combination. To access all our models and datasets, please visit our [HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395). For further details regarding dataset construction, please see our [paper](https://arxiv.org/pdf/2410.23463) and [Github repo](https://github.com/yale-nlp/MDCure). For additional details regarding how to use **yale-nlp/MDCure-FlanT5-Base**, please see below.
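As a rough illustration of the RM-as-a-judge filtering step described above (a hypothetical sketch, not the released MDCure code; `score` and `threshold` are assumed names):
```python
def filter_instructions(candidates, reward_model, threshold=0.8):
    """Keep candidate MD instructions whose averaged fine-grained scores pass the bar."""
    kept = []
    for example in candidates:
        scores = reward_model.score(example)      # fine-grained quality criteria
        if sum(scores) / len(scores) >= threshold:
            kept.append(example)
    return kept
```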
<p align="center">
<img src="fig1.png" width="90%">
</p>
<p align="center" style="margin-top: 0; padding-top: 0;">
<em>The MDCure pipeline generates diverse multi-document instructions, filters them via fine-grained scoring by MDCureRM, and tunes a base LLM to enhance its multi-document capabilities.</em>
</p>
## Model Details
**yale-nlp/MDCure-FlanT5-Base** is initialized from [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) and fine-tuned on the [MDCure-72k](https://huggingface.co/datasets/yale-nlp/MDCure-72k) dataset.
## Requirements
We recommend using the latest version of HF Transformers, or any `transformers>4.35.0`, to avoid any potential versioning errors when using this model.
## Quickstart
Below we provide a code snippet demonstrating how to load the tokenizer and model and generate content in response to an input context concerning multiple source documents and a related question or instruction. We strongly recommend separating the texts and/or instruction with `\n` or `<doc-sep>` to maintain consistency with the format of the data used during training.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("yale-nlp/MDCure-FlanT5-Base", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("yale-nlp/MDCure-FlanT5-Base")

source_text_1 = ...  # first source document
source_text_2 = ...  # second source document
source_text_3 = ...  # third source document

# Join the documents and the instruction with "\n" (or "<doc-sep>"), matching the training format
input_text = f"{source_text_1}\n{source_text_2}\n{source_text_3}\nWhat happened in CHAMPAIGN regarding Lovie Smith and the 2019 defense improvements? Respond with 1-2 sentences."

input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## All MDCure Models
We open-source our custom multi-document instruction scoring model, MDCureRM, as well as our best MDCure'd models at the following links:
| Model | Huggingface Repo | Description |
|---------------------------|---------------------|------------------------------|
| **MDCureRM** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCureRM) | Multi-objective reward model to score and filter MD instruction data more cheaply and effectively than GPT-3.5-Turbo |
| **MDCure-FlanT5-Base** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Base) | **FlanT5-Base** fine-tuned with MDCure-72k |
| **MDCure-FlanT5-Large** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Large) | **FlanT5-Large** fine-tuned with MDCure-72k |
| **MDCure-Qwen2-1.5B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-1.5B-Instruct) | **Qwen2-1.5B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-Qwen2-7B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-7B-Instruct) | **Qwen2-7B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-LLAMA3.1-8B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-8B-Instruct) | **LLAMA3.1-8B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-LLAMA3.1-70B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-70B-Instruct) | **LLAMA3.1-70B-Instruct** fine-tuned with MDCure-72k |
## Citation
If you find our work useful, please cite our paper as:
```bibtex
@article{liu2024mdcure,
title={MDCure: A Scalable Pipeline for Multi-Document Instruction-Following},
author={Gabrielle Kaili-May Liu and Bowen Shi and Avi Caciularu and Idan Szpektor and Arman Cohan},
journal={arXiv preprint arXiv:2410.23463},
year={2024},
url={https://arxiv.org/abs/2410.23463}
}
```
|
namesarnav/causal_bert
|
namesarnav
| 2025-06-25T05:41:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:maveriq/bigbenchhard",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-24T16:35:00Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
datasets:
- maveriq/bigbenchhard
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the causal_judgement subset of [maveriq/bigbenchhard](https://huggingface.co/datasets/maveriq/bigbenchhard).
## Model description
This model was trained for 50 epochs (300 optimization steps). Final training loss: 0.2707; train runtime: 857.3 s (10.91 samples/s, 0.35 steps/s); total FLOs: 2.46e15.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
crosstar/mistral_6_CoT_whole_generated_fewshot
|
crosstar
| 2025-06-25T05:39:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T05:08:22Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vemedia/pok
|
vemedia
| 2025-06-25T05:34:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T05:32:23Z |
---
license: apache-2.0
---
|
videotvfusion/original-prajakta-mali-video-clip
|
videotvfusion
| 2025-06-25T05:28:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:28:20Z |
01 minutes ago- wAtch-original-prajakta-mali-video-clip
The original-prajakta-mali-video-clip video has become a trending topic across social media platforms, sparking widespread attention and concern.
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://t.co/w4GQblBMlq)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
carideeh/uuu_fine_tune_taipower
|
carideeh
| 2025-06-25T05:26:19Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:34:51Z |
---
license: apache-2.0
---
|
electric-otter/cgdtmoe
|
electric-otter
| 2025-06-25T05:23:43Z | 0 | 0 | null |
[
"en",
"base_model:electric-otter/cgdtmoe",
"base_model:finetune:electric-otter/cgdtmoe",
"license:mit",
"region:us"
] | null | 2025-06-23T13:49:51Z |
---
license: mit
language:
- en
base_model:
- electric-otter/cgdtmoe
new_version: electric-otter/cgdtmoe
---
|
CohenQu/sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard
|
CohenQu
| 2025-06-25T05:11:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceTB/smoltalk",
"base_model:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06",
"base_model:finetune:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T04:04:08Z |
---
base_model: CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06
datasets: HuggingFaceTB/smoltalk
library_name: transformers
model_name: sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard
This model is a fine-tuned version of [CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06](https://huggingface.co/CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06) on the [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CohenQu/sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/flexible-ordering/runs/lnymc5l7)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
khanhdang/test-model
|
khanhdang
| 2025-06-25T05:10:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T05:02:34Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** khanhdang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sam34738/new-muril-efficientnet-multilabel
|
sam34738
| 2025-06-25T05:07:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multilabel_multimodal",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T05:05:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
18-New-viral-videos-job-guru-online-video/job.guru.online.viral.video.link
|
18-New-viral-videos-job-guru-online-video
| 2025-06-25T05:03:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:02:41Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/3myjh3p6?new-leaked-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
UdayAgrawal29/handwritten-devanagari-text-recognition-updated
|
UdayAgrawal29
| 2025-06-25T05:01:39Z | 0 | 0 | null |
[
"safetensors",
"vision-encoder-decoder",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T04:57:20Z |
---
license: apache-2.0
---
|
corupta/DeepSeek-R1-0528-Qwen3-8B-int8-AutoRound-gptq-inc
|
corupta
| 2025-06-25T04:54:01Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"dataset:codeparrot/github-code-clean",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"8-bit",
"gptq",
"region:us"
] | null | 2025-06-24T05:18:46Z |
---
license: mit
datasets:
- codeparrot/github-code-clean
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
---
## Model Details
This model is an int8 quantization (group_size 128, symmetric) of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
Please follow the license of the original model.
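GPTQ-format checkpoints like this one can typically be loaded directly through 🤗 Transformers; a minimal sketch (assumes a GPTQ-capable environment with the appropriate kernels installed; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "corupta/DeepSeek-R1-0528-Qwen3-8B-int8-AutoRound-gptq-inc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Prove that the square root of 2 is irrational.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```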
### Evaluate the model
~~~bash
auto-round --eval --model "corupta/DeepSeek-R1-0528-Qwen3-8B-int8-AutoRound-gptq-inc" --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid
~~~
| Metric | BF16 | INT8(auto-round) | INT8 (auto-round-best) |
| -------------------- | ------ | ---------------- | ---------------------- |
| Avg | 0.5958 | ? | ? |
| arc_challenge | 0.5137 | ? | ? |
| arc_easy | 0.7908 | ? | ? |
| boolq | 0.8498 | ? | ? |
| ceval-valid | 0.7296 | ? | ? |
| cmmlu | 0.7159 | ? | ? |
| gsm8k | 0.8211 | ? | ? |
| hellaswag | 0.5781 | ? | ? |
| lambada_openai | 0.5544 | ? | ? |
| leaderboard_ifeval | 0.2731 | ? | ? |
| leaderboard_mmlu_pro | 0.4115 | ? | ? |
| openbookqa | 0.3020 | ? | ? |
| piqa | 0.7617 | ? | ? |
| truthfulqa_mc1 | 0.3562 | ? | ? |
| winogrande | 0.6835 | ? | ? |
### Reproduce the model
Here is the sample command to reproduce the model
```bash
auto-round \
--model_name deepseek-ai/DeepSeek-R1-0528-Qwen3-8B \
--device 0 \
--bits 8 \
--format "auto_gptq" \
--enable_torch_compile \
--dataset codeparrot/github-code-clean \
--output_dir "./tmp_autoround"
```
|
kaiwenw/single_node_run2-step-486
|
kaiwenw
| 2025-06-25T04:44:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T04:43:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi
|
mcryptoone
| 2025-06-25T04:41:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am screeching finicky kiwi",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-04T22:18:13Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am screeching finicky kiwi
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
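In brief, GRPO removes PPO's learned value baseline and normalizes rewards within each group of sampled completions. For a group of $G$ samples with rewards $r_1, \dots, r_G$, the advantage of sample $i$ is (core idea from the paper):

$$A_i = \frac{r_i - \operatorname{mean}(r_1, \dots, r_G)}{\operatorname{std}(r_1, \dots, r_G)}$$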
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
myoshimu/gemma-product-description
|
myoshimu
| 2025-06-25T04:39:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:13:18Z |
---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-product-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="myoshimu/gemma-product-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sam34738/new-muril-efficientnet-binary
|
sam34738
| 2025-06-25T04:26:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"binary_multimodal",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T04:25:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
johngreendr1/53087c24-3d16-4267-8d92-a6385630345a
|
johngreendr1
| 2025-06-25T04:03:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"region:us"
] | null | 2025-06-25T04:03:26Z |
---
base_model: unsloth/llama-3-8b-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
HectorHe/Qwen2.5-1.5B-Open-R1-Distill
|
HectorHe
| 2025-06-25T03:59:24Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-30T00:07:09Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/17r355kw)
This model was trained with SFT.
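For reference, a minimal sketch of what such an SFT run looks like with TRL's `SFTTrainer`. The exact training script is not included in this card, so the dataset split, output directory, and omitted hyperparameters below are assumptions:

```python
# Minimal TRL SFT sketch -- illustrative only; the actual hyperparameters
# used for this model are not documented in this card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train")  # split assumed

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-1.5B-Open-R1-Distill"),  # output dir assumed
)
trainer.train()
```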
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| goaiguru/medical-qa-phi3-mini-mac | goaiguru | 2025-06-25T03:54:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-25T03:54:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| RedbeardNZ/DreamO | RedbeardNZ | 2025-06-25T03:41:36Z | 0 | 0 | null | ["arxiv:2504.16915", "license:apache-2.0", "region:us"] | null | 2025-06-25T03:41:35Z |
---
license: apache-2.0
---
# DreamO
Official model of **[DreamO: A Unified Framework for Image Customization](https://arxiv.org/abs/2504.16915)**
Hugging Face demo: https://huggingface.co/spaces/ByteDance/DreamO
GitHub code: https://github.com/bytedance/DreamO
| cgifbribcgfbi/Llama-3.3-70B-chem-oc-nosynth | cgifbribcgfbi | 2025-06-25T03:31:50Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "dataset:oc-nosynth_5000.jsonl", "base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned", "base_model:adapter:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned", "license:llama3.3", "4-bit", "bitsandbytes", "region:us"] | null | 2025-06-25T00:50:32Z |
---
library_name: peft
license: llama3.3
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
tags:
- axolotl
- generated_from_trainer
datasets:
- oc-nosynth_5000.jsonl
model-index:
- name: Llama-3.3-70B-chem-oc-nosynth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0`
```yaml
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
load_in_8bit: false
load_in_4bit: true
adapter: qlora
wandb_name: Llama-3.3-70B-chem-oc-nosynth
output_dir: ./outputs/out/Llama-3.3-70B-chem-oc-nosynth
hub_model_id: cgifbribcgfbi/Llama-3.3-70B-chem-oc-nosynth
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
strict: false
datasets:
- path: oc-nosynth_5000.jsonl
type: chat_template
field_messages: messages
dataset_prepared_path: last_run_prepared
# val_set_size: 0.05
# eval_sample_packing: False
save_safetensors: true
sequence_len: 3373
sample_packing: true
pad_to_sequence_len: true
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: false
lora_modules_to_save:
wandb_mode:
wandb_project: finetune-sweep
wandb_entity: gpoisjgqetpadsfke
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4 # This will be automatically adjusted based on available GPU memory
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 3
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# Llama-3.3-70B-chem-oc-nosynth
This model is a fine-tuned version of [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned) on the oc-nosynth_5000.jsonl dataset.
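Because the adapter was trained with QLoRA (`load_in_4bit: true` in the config above), inference requires loading the base model in 4-bit and attaching the PEFT adapter. A minimal sketch using the standard transformers/PEFT APIs — an assumption for illustration, not code from the training setup:

```python
# Illustrative QLoRA inference sketch -- not taken from the original training setup.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"
adapter_id = "cgifbribcgfbi/Llama-3.3-70B-chem-oc-nosynth"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                      # matches load_in_4bit in the axolotl config
        bnb_4bit_compute_dtype=torch.bfloat16,  # bf16, as used in training
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```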
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 648
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
| Daniel-xue/uuu_fine_tune_gpt2 | Daniel-xue | 2025-06-25T03:28:59Z | 0 | 0 | null | ["safetensors", "gpt2", "license:apache-2.0", "region:us"] | null | 2025-06-25T02:24:19Z |
---
license: apache-2.0
---
| vishakr01/comp4_12 | vishakr01 | 2025-06-25T03:24:06Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-25T03:22:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
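The card leaves this section as a placeholder. Given the repo's tags (`llama`, `text-generation`), a generic loading sketch would look like the following — an assumption inferred from the tags, not author-provided code:

```python
# Generic text-generation sketch inferred from the repo tags -- the card
# itself provides no usage code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vishakr01/comp4_12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```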
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mlfoundations-dev/openthoughts3_100k_qwen25_1b_bsz1024_lr8e5_epochs5 | mlfoundations-dev | 2025-06-25T03:23:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-24T08:11:54Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts3_100k_qwen25_1b_bsz1024_lr8e5_epochs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts3_100k_qwen25_1b_bsz1024_lr8e5_epochs5
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the mlfoundations-dev/openthoughts3_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024 (derived in the sketch after this list)
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
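The effective batch size in the model name (`bsz1024`) follows directly from the values above:

```python
# Effective batch size, reproduced from the hyperparameters above.
train_batch_size = 4             # per device
num_devices = 32
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 1024
```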
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
| cjisnc/task1 | cjisnc | 2025-06-25T03:18:21Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-25T03:18:21Z |
---
license: apache-2.0
---