| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string | string | timestamp[us, tz=UTC] | int64 | int64 | string | list | string | timestamp[us, tz=UTC] | string |
nice2mitya/a_851323441
|
nice2mitya
| 2025-08-07T20:04:32Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-07T19:35:54Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/nice2mitya/a_851323441/00a092e643a643bc723364b3f55c70157c47e2e8/README.md?%2Fnice2mitya%2Fa_851323441%2Fresolve%2Fmain%2FREADME.md=&etag=%224f4e0c7e50cb3c3124f645ac3f757a29760de273%22
|
prakod/codemix-test
|
prakod
| 2025-08-07T19:51:56Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:ai4bharat/IndicBART",
"base_model:finetune:ai4bharat/IndicBART",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T08:52:24Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/prakod/codemix-test/79664dbb3a9db4ba148396c92528fd969ff2aa18/README.md?%2Fprakod%2Fcodemix-test%2Fresolve%2Fmain%2FREADME.md=&etag=%228c14ec7a1383ccf7d5cd012c056d7cdddc92e9a9%22
|
VIDEOS-20-fakha-viral-video-twitter-links/NEW.FULL.VIDEOS.fakha.Viral.Video.Official.Tutorial
|
VIDEOS-20-fakha-viral-video-twitter-links
| 2025-08-07T19:26:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T19:25:56Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/VIDEOS-20-fakha-viral-video-twitter-links/NEW.FULL.VIDEOS.fakha.Viral.Video.Official.Tutorial/9933ed9df8be6353e16cd0f0c66e439270cd514c/README.md?%2FVIDEOS-20-fakha-viral-video-twitter-links%2FNEW.FULL.VIDEOS.fakha.Viral.Video.Official.Tutorial%2Fresolve%2Fmain%2FREADME.md=&etag=%22bc61accb834478eb1e4397b70725bbe5d157588c%22
|
powermove72/LLama-6b-amt-v0.5-FTA-Alpagasus
|
powermove72
| 2025-08-07T18:05:25Z | 0 | 0 | null |
[
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"powermove72/LLama-6B-amt-v0.5-Alpagasus",
"powermove72/LLama-6b-amt-v0.5-FTA",
"base_model:powermove72/LLama-6B-amt-v0.5-Alpagasus",
"base_model:merge:powermove72/LLama-6B-amt-v0.5-Alpagasus",
"base_model:powermove72/LLama-6b-amt-v0.5-FTA",
"base_model:merge:powermove72/LLama-6b-amt-v0.5-FTA",
"region:us"
] | null | 2025-08-07T18:01:48Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/powermove72/LLama-6b-amt-v0.5-FTA-Alpagasus/ef71b8641673d11d46a716cf9e9b306fe69f522b/README.md?%2Fpowermove72%2FLLama-6b-amt-v0.5-FTA-Alpagasus%2Fresolve%2Fmain%2FREADME.md=&etag=%229ce2d5e9fafe01313fb8d33be22e6ec6dc23062d%22
|
jimregan/kaldi-sprakbanken_swe
|
jimregan
| 2025-08-07T16:59:54Z | 0 | 0 |
kaldi
|
[
"kaldi",
"sv",
"dataset:KTH/nst",
"license:apache-2.0",
"region:us"
] | null | 2025-07-26T11:34:29Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/jimregan/kaldi-sprakbanken_swe/6957986878eb54a3741472475dbb99b0c2071f67/README.md?%2Fjimregan%2Fkaldi-sprakbanken_swe%2Fresolve%2Fmain%2FREADME.md=&etag=%22ff31c3b853f8f5953711bbba0a47174ef0a75260%22
|
hdong0/Qwen2.5-Math-1.5B-untied-batch-cross-GRPO_deepscaler_acc_seq_end_mask_template_simple_re
|
hdong0
| 2025-08-07T16:44:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:hdong0/Qwen2.5-Math-1.5B-untied",
"base_model:finetune:hdong0/Qwen2.5-Math-1.5B-untied",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T03:15:22Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/hdong0/Qwen2.5-Math-1.5B-untied-batch-cross-GRPO_deepscaler_acc_seq_end_mask_template_simple_re/7aa8e270fe6d3d196e3f73a99b230755cb1635d8/README.md?%2Fhdong0%2FQwen2.5-Math-1.5B-untied-batch-cross-GRPO_deepscaler_acc_seq_end_mask_template_simple_re%2Fresolve%2Fmain%2FREADME.md=&etag=%22e49614124737c6391bbcd68e9de55eb7eb3923e9%22
|
mahsharyahan/vit-ai-detection
|
mahsharyahan
| 2025-08-07T16:01:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-07T15:58:55Z |
Temporary Redirect. Redirecting to /api/resolve-cache/models/mahsharyahan/vit-ai-detection/06f433ee2ec061fe8de342e419e8f876f2b09353/README.md?%2Fmahsharyahan%2Fvit-ai-detection%2Fresolve%2Fmain%2FREADME.md=&etag=%2235bd71a0c2d9dde443701793e634c59b8daa3a91%22
|
BhurchandiMandar/Qwen3-8B-CPT-Epoch6
|
BhurchandiMandar
| 2025-08-07T15:08:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T15:05:45Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: pretrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pretrain
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the erp_zyda2, erp_books, erp_web, non_erp, oracle_api, and oracle_tables datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 192
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 6
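For reference, the total train batch size above is the product of the per-device batch size, the number of devices, and the gradient accumulation steps: 3 × 8 × 8 = 192.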
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Yujie-AI/Yi_34B_LLaVA-linear-coeff0.2
|
Yujie-AI
| 2025-08-07T14:40:33Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T22:20:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
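Since this section is left unfilled, the following is a minimal, hypothetical sketch that assumes the checkpoint follows the standard 🤗 Transformers LLaVA-NeXT API suggested by its `llava_next` / `image-to-text` tags; the prompt template and image URL are placeholders, so verify both against the repository files before use.

```python
# Hypothetical usage sketch -- assumes the standard LLaVA-NeXT classes apply to this checkpoint.
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "Yujie-AI/Yi_34B_LLaVA-linear-coeff0.2"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # requires accelerate; a 34B model needs substantial VRAM
)

image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)  # placeholder image URL
prompt = "USER: <image>\nDescribe this image. ASSISTANT:"  # prompt template is an assumption

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```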
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sinequa/vectorizer.vanilla
|
sinequa
| 2025-08-07T14:39:00Z | 669 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T07:31:15Z |
---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
- en
---
# Model Card for `vectorizer.vanilla`
This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index.
Model name: `vectorizer.vanilla`
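No usage code is provided on the card; the sketch below is an illustrative approximation that loads the checkpoint with the generic 🤗 Transformers feature-extraction classes and uses mean pooling with cosine similarity. The pooling choice is an assumption, and a plain `AutoModel` load may not apply the additional 256-dimension dense reduction layer mentioned below, so this is not the documented Sinequa pipeline.

```python
# Illustrative embedding sketch -- pooling and similarity choices are assumptions, not the Sinequa pipeline.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sinequa/vectorizer.vanilla"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # ignore padding positions when averaging
    emb = (hidden * mask).sum(1) / mask.sum(1)         # mean pooling over tokens
    return torch.nn.functional.normalize(emb, dim=-1)

query = embed(["how does a vector index work?"])
passage = embed(["A vector index stores passage embeddings for nearest-neighbour lookup at query time."])
print(query @ passage.T)                               # cosine similarity after L2 normalization
```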
## Supported Languages
The model was trained and tested in the following languages:
- English
## Scores
| Metric | Value |
|:-----------------------|------:|
| Relevance (Recall@100) | 0.639 |
Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 1 ms | 5 ms |
| NVIDIA A10 | FP32 | 2 ms | 20 ms |
| NVIDIA T4 | FP16 | 1 ms | 14 ms |
| NVIDIA T4 | FP32 | 2 ms | 53 ms |
| NVIDIA L4 | FP16 | 1 ms | 5 ms |
| NVIDIA L4 | FP32 | 3 ms | 25 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 300 MiB |
| FP32 | 500 MiB |
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- [Cuda compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 23 million
- Base language model: [English MiniLM-L6-H384](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: query-passage-negative triplets for datasets with mined hard negatives, and query-passage pairs for the rest; the number of negatives is augmented with an in-batch negatives strategy (illustrated in the sketch below).
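To make the in-batch negatives idea concrete, here is a short, generic sketch of an InfoNCE-style objective; it is not the actual Sinequa training code, and the loss form and temperature are assumptions.

```python
# Generic in-batch negatives (InfoNCE-style) sketch -- loss form and temperature are assumptions.
import torch
import torch.nn.functional as F

def in_batch_negative_loss(query_emb, passage_emb, temperature=0.05):
    """query_emb, passage_emb: (batch, dim), L2-normalized; row i of each forms a positive pair."""
    scores = query_emb @ passage_emb.T / temperature             # (batch, batch) similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)  # positives lie on the diagonal
    return F.cross_entropy(scores, labels)                       # other passages in the batch act as negatives
```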
### Training Data
The model has been trained using all datasets that are cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model.
### Evaluation Metrics
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | Recall@100 |
|:------------------|-----------:|
| Average | 0.639 |
| | |
| Arguana | 0.969 |
| CLIMATE-FEVER | 0.509 |
| DBPedia Entity | 0.409 |
| FEVER | 0.839 |
| FiQA-2018 | 0.702 |
| HotpotQA | 0.609 |
| MS MARCO | 0.849 |
| NFCorpus | 0.315 |
| NQ | 0.786 |
| Quora | 0.995 |
| SCIDOCS | 0.497 |
| SciFact | 0.911 |
| TREC-COVID | 0.129 |
| Webis-Touche-2020 | 0.427 |
|
CycloneDX/cdx1-pro-30B-Q4_K_M-GGUF
|
CycloneDX
| 2025-08-07T14:34:50Z | 13 | 0 |
gguf
|
[
"gguf",
"safetensors",
"qwen3_moe",
"text-generation",
"cdxgen",
"transformers",
"sbom",
"supply-chain-security",
"en",
"dataset:CycloneDX/cdx-docs",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-03T21:24:59Z |
---
base_model: unsloth/Qwen3-Coder-30B-A3B-Instruct
language:
- en
library_name: gguf
license: apache-2.0
tags:
- cdxgen
- transformers
- sbom
- supply-chain-security
- gguf
pipeline_tag: text-generation
datasets:
- CycloneDX/cdx-docs
---
# Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), and the CycloneDX specification. The base models, `unsloth/Qwen2.5-Coder-14B-Instruct` (for cdx1) and `unsloth/Qwen3-Coder-30B-A3B-Instruct` (for cdx1-pro), were fine-tuned on a specialized, high-quality [dataset](https://huggingface.co/CycloneDX/datasets). This dataset was constructed using a synthetic data generation strategy with a teacher model (Gemini 2.5 Pro). The primary objective was to align the fine-tuned models' capabilities with the teacher model's performance on xBOM and CycloneDX-related question-answering tasks.
## Approach to Data
### Data Curation and Generation
The models were trained on [cdx-docs](https://huggingface.co/datasets/CycloneDX/cdx-docs), a curated dataset comprising technical documentation, authoritative OWASP guides, and semantic interpretations derived from the CycloneDX Generator (cdxgen) source code. The dataset was augmented using a synthetic data generation technique. This process involved prompting a teacher model (Gemini 2.5 Pro) to generate question-answer pairs that encapsulate the nuances and semantics of the domain. The generated data was structured to facilitate effective learning by the target cdx1 models.
### Alignment with Inference
During the training phase, the dataset was iteratively refined to ensure the format and context of the training examples closely resembled the intended inference-time inputs. This alignment is critical for the models to learn the domain's complexity and respond accurately to real-world prompts.
## Benchmarking
The cdx1 models are optimized for xBOM use cases, including BOM summarization, component tagging, validation, and troubleshooting. To evaluate model performance, we developed a custom benchmark suite named [xBOMEval](https://github.com/CycloneDX/cdxgen/tree/master/contrib/xBOMEval).
### Categories
xBOMEval contains tests across the following categories:
- **Bias:** Assesses potential model bias towards CycloneDX or SPDX specifications through targeted questions.
- **Specification (Spec):** Measures factual recall and synthesis on topics such as CycloneDX, PURL, and SPDX.
- **Logic:** Evaluates problem-solving and reasoning capabilities with complex questions about specifications.
- **DevOps:** Assesses knowledge of platforms and tools like GitHub, Azure Pipelines, and package managers.
- **Linux:** Tests proficiency with Linux environments, including terminal and PowerShell commands.
- **Docker:** Measures understanding of Docker, Podman, and the OCI specification.
### Scoring
Model responses were scored using a combination of automated evaluation by a high-capability model (Gemini 2.5 Pro) and manual human review. To maintain benchmark integrity, the evaluation set was held out and not included in any model's training data. Detailed results and configurations are available in the `xBOMEval` directory of the [cdxgen repository](https://github.com/CycloneDX/cdxgen).
## Benchmark Results - August 2025
### Logic Category Comparison
The Logic category tests reasoning and problem-solving skills. The table below compares the accuracy of nine models on these tasks.
| Model | Accuracy (%) |
| :--- | :--- |
| **cdx1-mlx-8bit** | **46.04** |
| **cdx1-pro-mlx-8bit** | **73.17** |
| gemini-2.5-pro | 93.60 |
| o4-mini-high | 67.99 |
| qwen3-coder-480B | 48.48 |
| deepthink-r1 | 89.63 |
| deepseek-r1 | 82.92 |
| gpt-oss-120b | 80.49 |
| gpt-oss-20b | 79.27 |
**Summary of Results:**
- **Top Performer:** `gemini-2.5-pro` achieved the highest accuracy at 93.6%.
- **High Performers:** A group of models demonstrated strong reasoning, including `deepthink-r1` (89.6%), `deepseek-r1` (82.9%), and `gpt-oss-120b` (80.5%).
- **Specialized Model Performance:** `cdx1-pro` (30B parameters) performed competitively at 73.2%. The score for `cdx1` (14B parameters) was 46.0%, a result attributed primarily to context length limitations rather than a fundamental deficiency in logic.
- **Performance Tiers:** The results indicate distinct performance tiers, with a significant gap between the top-performing models (>80%) and others.
### Specification Category Comparison
The Spec category tests the recall of factual information from technical specifications.
| Model | Accuracy (%) |
| :--- | :--- |
| **cdx1-mlx-8bit** | **83.52** |
| **cdx1-pro-mlx-8bit** | **98.3** |
| gemini-2.5-pro | 100 |
| o4-mini-high | 0 |
| qwen3-coder-480B | 90.34 |
| deepthink-r1 | 12.36 |
| deepseek-r1 | 98.58 |
| gpt-oss-120b | 89.2 |
| gpt-oss-20b | 9.09 |
**Summary of Results:**
- **Near-Perfect Recall:** `gemini-2.5-pro` (100%), `deepseek-r1` (98.6%), and `cdx1-pro` (98.3%) demonstrated exceptional performance.
- **Behavioral Failures:** Three models scored poorly due to operational issues rather than a lack of knowledge. `o4-mini-high` (0%) refused to answer, while `deepthink-r1` (12.4%) and `gpt-oss-20b` (9.1%) answered only a small fraction of questions.
- **`cdx1` Performance:** The smaller `cdx1` model scored 83.5%. Its performance was negatively affected by a systematic misunderstanding of certain technical terms, highlighting the challenge of ensuring factual accuracy in highly specialized domains.
### Other Categories
Performance in additional technical categories is summarized below.
| Category | cdx1-mlx-8bit | cdx1-pro-mlx-8bit |
| -------- | ------------- | ----------------- |
| DevOps | 87.46% | 96.1% |
| Docker | 89.08% | 100% |
| Linux | 90.6% | 95.8% |
## Model Availability
The `cdx1` and `cdx1-pro` models are provided in multiple formats and quantization levels to facilitate deployment across diverse hardware environments. Models are available in the **MLX** format, optimized for local inference on Apple Silicon, and the **GGUF** format, which offers broad compatibility with CPUs and various GPUs. The selection of quantization levels allows users to balance performance with resource consumption, enabling effective operation even in environments with limited VRAM.
The table below details the available formats and their approximate resource requirements. All quantized models can be found on [Hugging Face](https://huggingface.co/CycloneDX/models).
| Model | Format | Quantization | File Size (GiB) | Est. VRAM (GiB) | Notes |
| :----------------- | :----- | :----------- | :-------------- | :-------------- | :----------------------------------------- |
| **cdx1 (14B)** | MLX | 4-bit | ~8.1 | > 8 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~12 | > 12 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~14.2 | > 14 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~30 | > 30 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 8.99 | ~10.5 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 15.7 | ~16.5 | Near-lossless quality. |
| | GGUF | BF16 | 29.5 | ~30 | bfloat16 for fine-tuning. |
| **cdx1-pro (30B)** | MLX | 4-bit | ~17.5 | > 18 | For Apple Silicon with unified memory. |
| | MLX | 6-bit | ~24.8 | > 25 | For Apple Silicon with unified memory. |
| | MLX | 8-bit | ~32.4 | > 33 | Higher fidelity for Apple Silicon. |
| | MLX | 16-bit | ~57 | > 57 | bfloat16 for fine-tuning. |
| | GGUF | Q4_K_M | 18.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | IQ4_NL | 17.6 | ~20.0 | Recommended balance for quality/size. |
| | GGUF | Q8_0 | 32.5 | ~33 | Near-lossless quality. |
| | GGUF | Q2_K | 11.3 | ~12 | Low quality. Use for speculative decoding. |
| | GGUF | BF16 | 57 | ~60 | bfloat16 for fine-tuning. |
**Notes on Quantization and Formats:**
- **IQ4_NL (Importance-aware Quantization, Non-Linear):** A sophisticated 4-bit method that preserves important model weights with higher precision. It often provides superior performance compared to standard 4-bit quants at a similar file size and is a strong alternative to `Q4_K_M`.
- **K-Quants (Q2_K, Q4_K_M):** This family of quantization methods generally offers a better quality-to-size ratio than older \_0 or \_1 variants.
- **Q2_K:** An extremely small 2-bit quantization designed for environments with severe resource limitations. Users should anticipate a noticeable reduction in model accuracy and coherence in exchange for the minimal VRAM and storage footprint.
- **Q8_0:** A full 8-bit quantization that provides high fidelity at the cost of a larger file size. It is suitable for systems with ample VRAM.
- **VRAM Requirements:** The values provided are estimates for loading the model and processing a moderate context. Actual VRAM consumption can vary based on factors such as context length, batch size, and the specific inference software used.
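As an illustration of running one of the GGUF quantizations listed above locally, the sketch below uses `llama-cpp-python`; the exact GGUF filename inside the repository and the context size are assumptions and should be checked against the repository file listing.

```python
# Hypothetical local-inference sketch with llama-cpp-python -- the GGUF filename is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="CycloneDX/cdx1-pro-30B-Q4_K_M-GGUF",
    filename="cdx1-pro-30B-Q4_K_M.gguf",  # assumed filename; verify in the repo file listing
)

llm = Llama(model_path=gguf_path, n_ctx=8192, n_gpu_layers=-1)  # offload all layers if VRAM allows
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the purpose of a CycloneDX SBOM."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```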
## Safety and Bias
### Safety
To rigorously evaluate safety performance, we developed a comprehensive testing framework comprising over 200 adversarial prompts spanning 10 critical risk categories including cybersecurity threats, hate speech, illegal activities, privacy violations, physical safety risks, misinformation, bias and discrimination, self-harm, child safety, and copyright infringement. These questions were systematically generated using a multi-layered approach: first establishing domain-specific threat models based on NIST AI RMF guidelines, then crafting prompts that incorporate real-world evasion techniques (including leetspeak substitutions, roleplay scenarios, and encoded instructions) to test for policy circumvention. Each category contains progressively severe prompts ranging from general inquiries about harmful activities to highly specific requests for executable code and step-by-step instructions. During evaluation, our model consistently refused all safety-compromising requests, demonstrating robust adherence to ethical boundaries without attempting to fulfill harmful instructions—even when presented with sophisticated evasion attempts. This testing protocol exceeds standard industry benchmarks by incorporating both direct harmful requests and nuanced edge cases designed to probe boundary conditions in safety policies.
### Bias
Our analysis reveals that the cdx1 and cdx1-pro models exhibit a notable bias toward CycloneDX specifications, a tendency directly attributable to the composition of their training data, which contains significantly more CycloneDX-related content than competing Software Bill of Materials (SBOM) standards. This data imbalance manifests in the models' consistent preference for recommending CycloneDX over alternative frameworks such as SPDX and OmniBOR, even in contexts where these competing standards might offer superior suitability for specific use cases. The models frequently fail to provide balanced comparative analysis, instead defaulting to CycloneDX-centric recommendations without adequate consideration of factors like ecosystem compatibility, tooling support, or organizational requirements that might favor alternative specifications. We recognize this as a limitation affecting the models' objectivity in technical decision support. Our long-term mitigation strategy involves targeted expansion of the training corpus with high-quality, balanced documentation of all major SBOM standards, implementation of adversarial debiasing techniques during fine-tuning, and development of explicit prompting protocols that require the model to evaluate multiple standards against specific technical requirements before making recommendations. We are committed to evolving cdx1 toward genuine impartiality in standards evaluation while maintaining its deep expertise in software supply chain security.
## Weaknesses
(To be determined)
## Acknowledgments
(To be determined)
## Citation
Please cite the following resources if you use the datasets, models, or benchmark in your work.
### For the Dataset
```bibtex
@misc{cdx-docs,
author = {OWASP CycloneDX Generator Team},
title = {{cdx-docs: A Curated Dataset for SBOM and DevOps Tasks}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/datasets/CycloneDX/cdx-docs}}
}
```
### For the Models
```bibtex
@misc{cdx1_models,
author = {OWASP CycloneDX Generator Team},
title = {{cdx1 and cdx1-pro: Language Models for SBOM and DevOps}},
year = {2025},
month = {February},
howpublished = {\url{https://huggingface.co/CycloneDX}}
}
```
### For the xBOMEval Benchmark
```bibtex
@misc{xBOMEval_v1,
author = {OWASP CycloneDX Generator Team},
title = {{xBOMEval: A Benchmark for Evaluating Language Models on SBOM Tasks}},
year = {2025},
month = {August},
howpublished = {\url{https://github.com/CycloneDX/cdxgen}}
}
```
## Licenses
- **Datasets:** CC0-1.0
- **Models:** Apache-2.0
|
PaperPred/PaperPrediction-LLM-1B
|
PaperPred
| 2025-08-07T14:34:16Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-08-06T15:16:50Z |
---
license: cc-by-nc-4.0
---
|
Yujie-AI/Mistral_7B_LLaVA-linear-coeff1.0
|
Yujie-AI
| 2025-08-07T14:33:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T20:47:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yujie-AI/Mistral_7B_LLaVA-linear-coeff0.8
|
Yujie-AI
| 2025-08-07T14:33:13Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-04-23T20:42:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SwetaJena/llama-3.2-1B-phoenix_numbers_6
|
SwetaJena
| 2025-08-07T14:18:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T14:18:09Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SwetaJena
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
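The card does not include inference code; below is a minimal sketch assuming the repository contains merged weights loadable with plain 🤗 Transformers (if only LoRA adapters were pushed, loading via PEFT would be needed instead), and the prompt text is a placeholder.

```python
# Minimal inference sketch -- assumes merged weights in the repo (otherwise load the adapter via PEFT).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SwetaJena/llama-3.2-1B-phoenix_numbers_6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: ..."  # placeholder; the training prompt format is not documented on the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```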
|
rpreite/quantized-Qwen3-Embedding-4B-compressor-INT4-W4A16
|
rpreite
| 2025-08-07T13:59:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-07T11:30:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
avigil/AIArtjak_Backup
|
avigil
| 2025-08-07T13:47:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T13:17:47Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Backup of some SD1.5 embeddings created by AIArtjak
<!-- Provide a quick summary of what the model is/does. -->
These models were originally uploaded to Civitai.
|
mradermacher/Lacaille-MoT-4B-Supreme2-GGUF
|
mradermacher
| 2025-08-07T13:35:26Z | 2,172 | 1 |
transformers
|
[
"transformers",
"gguf",
"moe",
"trl",
"mot",
"code",
"science",
"math",
"mixture-of-thoughts",
"supreme2",
"stem",
"text-generation-inference",
"reasoning",
"vlm",
"en",
"zh",
"dataset:open-r1/Mixture-of-Thoughts",
"dataset:nvidia/OpenCodeReasoning",
"base_model:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"base_model:quantized:prithivMLmods/Lacaille-MoT-4B-Supreme2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-02T09:02:10Z |
---
base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2
datasets:
- open-r1/Mixture-of-Thoughts
- nvidia/OpenCodeReasoning
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- moe
- trl
- mot
- code
- science
- math
- mixture-of-thoughts
- supreme2
- stem
- text-generation-inference
- reasoning
- vlm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lacaille-MoT-4B-Supreme2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
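As a brief illustration, a single-file quant from the table below can be fetched with `huggingface_hub` and then passed to any GGUF runtime; the filename here is taken from the Q4_K_M entry in the table.

```python
# Sketch: download one of the single-file quants listed below, then point a GGUF runtime at it.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Lacaille-MoT-4B-Supreme2-GGUF",
    filename="Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf",  # filename from the "Provided Quants" table
)
print(path)  # local file path; pass this to llama.cpp, LM Studio, or another GGUF runtime
```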
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ekiprop/SST-2-GLoRA-p20-seed42
|
ekiprop
| 2025-08-07T13:20:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T13:02:43Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: SST-2-GLoRA-p20-seed42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST-2-GLoRA-p20-seed42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1829
- Accuracy: 0.9484
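The card does not show loading code; below is a minimal sketch assuming this repository holds a PEFT/LoRA adapter over a two-label `roberta-base` sequence-classification head (SST-2); the label mapping is an assumption, so check the adapter and model config before relying on it.

```python
# Hypothetical loading sketch -- assumes a LoRA adapter over a 2-label roberta-base classifier (SST-2).
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/SST-2-GLoRA-p20-seed42")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("a gripping, beautifully shot film", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print("positive" if pred == 1 else "negative")  # label order is an assumption
```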
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4148 | 0.0950 | 200 | 0.2439 | 0.9048 |
| 0.3068 | 0.1900 | 400 | 0.2253 | 0.9186 |
| 0.2973 | 0.2850 | 600 | 0.2111 | 0.9232 |
| 0.2591 | 0.3800 | 800 | 0.2244 | 0.9312 |
| 0.2612 | 0.4751 | 1000 | 0.2275 | 0.9220 |
| 0.2575 | 0.5701 | 1200 | 0.2045 | 0.9266 |
| 0.2517 | 0.6651 | 1400 | 0.2037 | 0.9151 |
| 0.247 | 0.7601 | 1600 | 0.1985 | 0.9300 |
| 0.245 | 0.8551 | 1800 | 0.1922 | 0.9346 |
| 0.2337 | 0.9501 | 2000 | 0.2158 | 0.9220 |
| 0.2441 | 1.0451 | 2200 | 0.1793 | 0.9346 |
| 0.2217 | 1.1401 | 2400 | 0.2008 | 0.9346 |
| 0.2322 | 1.2352 | 2600 | 0.2370 | 0.9266 |
| 0.2299 | 1.3302 | 2800 | 0.2176 | 0.9358 |
| 0.2206 | 1.4252 | 3000 | 0.2002 | 0.9369 |
| 0.2193 | 1.5202 | 3200 | 0.1932 | 0.9335 |
| 0.2221 | 1.6152 | 3400 | 0.1815 | 0.9369 |
| 0.2062 | 1.7102 | 3600 | 0.2135 | 0.9278 |
| 0.2099 | 1.8052 | 3800 | 0.2117 | 0.9358 |
| 0.2232 | 1.9002 | 4000 | 0.1835 | 0.9335 |
| 0.209 | 1.9952 | 4200 | 0.2056 | 0.9358 |
| 0.2066 | 2.0903 | 4400 | 0.1681 | 0.9404 |
| 0.2045 | 2.1853 | 4600 | 0.1792 | 0.9392 |
| 0.2029 | 2.2803 | 4800 | 0.1866 | 0.9450 |
| 0.2 | 2.3753 | 5000 | 0.1738 | 0.9415 |
| 0.2001 | 2.4703 | 5200 | 0.1846 | 0.9427 |
| 0.2031 | 2.5653 | 5400 | 0.2069 | 0.9289 |
| 0.2085 | 2.6603 | 5600 | 0.1725 | 0.9427 |
| 0.2005 | 2.7553 | 5800 | 0.1702 | 0.9438 |
| 0.1899 | 2.8504 | 6000 | 0.1845 | 0.9438 |
| 0.1912 | 2.9454 | 6200 | 0.1726 | 0.9392 |
| 0.1864 | 3.0404 | 6400 | 0.1801 | 0.9369 |
| 0.1795 | 3.1354 | 6600 | 0.1838 | 0.9404 |
| 0.192 | 3.2304 | 6800 | 0.1892 | 0.9346 |
| 0.1953 | 3.3254 | 7000 | 0.1834 | 0.9415 |
| 0.1802 | 3.4204 | 7200 | 0.1819 | 0.9472 |
| 0.1815 | 3.5154 | 7400 | 0.2059 | 0.9381 |
| 0.1802 | 3.6105 | 7600 | 0.1933 | 0.9427 |
| 0.1847 | 3.7055 | 7800 | 0.1720 | 0.9450 |
| 0.1882 | 3.8005 | 8000 | 0.1805 | 0.9472 |
| 0.1827 | 3.8955 | 8200 | 0.1829 | 0.9484 |
| 0.1761 | 3.9905 | 8400 | 0.1862 | 0.9472 |
| 0.1737 | 4.0855 | 8600 | 0.1966 | 0.9438 |
| 0.1693 | 4.1805 | 8800 | 0.2054 | 0.9450 |
| 0.1772 | 4.2755 | 9000 | 0.2037 | 0.9438 |
| 0.168 | 4.3705 | 9200 | 0.1910 | 0.9438 |
| 0.1636 | 4.4656 | 9400 | 0.2054 | 0.9461 |
| 0.164 | 4.5606 | 9600 | 0.2055 | 0.9484 |
| 0.1724 | 4.6556 | 9800 | 0.1950 | 0.9438 |
| 0.1671 | 4.7506 | 10000 | 0.1873 | 0.9461 |
| 0.1809 | 4.8456 | 10200 | 0.1823 | 0.9461 |
| 0.1777 | 4.9406 | 10400 | 0.1831 | 0.9461 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kmpartner/bkv2tpcmlra-test
|
kmpartner
| 2025-08-07T13:14:20Z | 41 | 0 |
peft
|
[
"peft",
"tensorboard",
"diffusers",
"safetensors",
"arxiv:1910.09700",
"base_model:nota-ai/bk-sdm-v2-tiny",
"base_model:adapter:nota-ai/bk-sdm-v2-tiny",
"region:us"
] | null | 2025-04-13T07:00:20Z |
---
library_name: peft
base_model: nota-ai/bk-sdm-v2-tiny
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
- PEFT 0.15.2
- PEFT 0.14.0
|
slarkcrypto/blockassist-bc-elusive_bellowing_hawk_1754571147
|
slarkcrypto
| 2025-08-07T12:54:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"elusive bellowing hawk",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T12:53:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- elusive bellowing hawk
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LlameUser/qwen-0.5B-r1-aha-moment
|
LlameUser
| 2025-08-07T12:40:20Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T11:51:23Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: qwen-0.5B-r1-aha-moment
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for qwen-0.5B-r1-aha-moment
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="LlameUser/qwen-0.5B-r1-aha-moment", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
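Below is a minimal sketch of what a comparable GRPO run with TRL could look like. The toy dataset and the reward function are illustrative assumptions; this card does not document the actual prompts or rewards used.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset -- the real training data for this model is not documented here
dataset = Dataset.from_dict({"prompt": ["Solve: 2 + 2 = ?", "Solve: 10 - 3 = ?"]})

# Hypothetical reward: favour completions that contain a digit
def reward_has_digit(completions, **kwargs):
    return [1.0 if any(ch.isdigit() for ch in c) else 0.0 for c in completions]

args = GRPOConfig(output_dir="qwen-0.5B-r1-aha-moment", num_generations=4)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_has_digit,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```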
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.2
- Pytorch: 2.7.0+cu128
- Datasets: 3.1.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
vocotnhan/blockassist-bc-stinging_aquatic_beaver_1754566708
|
vocotnhan
| 2025-08-07T12:37:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging aquatic beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T12:36:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging aquatic beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/ThinkLite-Critic-7b-i1-GGUF
|
mradermacher
| 2025-08-07T12:21:06Z | 506 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:lmms-lab/LLaVA-Critic-R1-7B-Plus",
"base_model:quantized:lmms-lab/LLaVA-Critic-R1-7B-Plus",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-27T10:35:28Z |
---
base_model: lmms-lab/LLaVA-Critic-R1-7B-Plus
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/lmms-lab/LLaVA-Critic-R1-7B-Plus
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ThinkLite-Critic-7b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/ThinkLite-Critic-7b-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
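If you prefer scripting the download, here is a minimal text-only sketch using `huggingface_hub` and `llama-cpp-python`; the chosen quant filename comes from the table below, while the prompt, context size, and generation settings are illustrative assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quantized files listed below (Q4_K_M is the "fast, recommended" trade-off)
model_path = hf_hub_download(
    repo_id="mradermacher/ThinkLite-Critic-7b-i1-GGUF",
    filename="ThinkLite-Critic-7b.i1-Q4_K_M.gguf",
)

# Load the model and run a short text-only generation
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Describe what a critic model is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```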
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ThinkLite-Critic-7b-i1-GGUF/resolve/main/ThinkLite-Critic-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF
|
mradermacher
| 2025-08-07T12:09:53Z | 100 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"base_model:doodod/Turn-Detector-Qwen2.5-0.5B-Instruct",
"base_model:quantized:doodod/Turn-Detector-Qwen2.5-0.5B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-06T11:27:25Z |
---
base_model: doodod/Turn-Detector-Qwen2.5-0.5B-Instruct
language:
- zh
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/doodod/Turn-Detector-Qwen2.5-0.5B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Turn-Detector-Qwen2.5-0.5B-Instruct-GGUF/resolve/main/Turn-Detector-Qwen2.5-0.5B-Instruct.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ekiprop/CoLA-Fisher-GLoRA-p50-seed10
|
ekiprop
| 2025-08-07T11:55:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T11:53:39Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-Fisher-GLoRA-p50-seed10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-Fisher-GLoRA-p50-seed10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4356
- Matthews Correlation: 0.5676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
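For reference, here is a minimal sketch of how a comparable LoRA fine-tune of roberta-base on CoLA could be set up with PEFT and the hyperparameters above; the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, since this card does not specify them.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Hypothetical LoRA settings -- rank/alpha/target modules are not given in this card
lora = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.1,
                  target_modules=["query", "value"])
model = get_peft_model(base, lora)

ds = load_dataset("glue", "cola")
ds = ds.map(lambda ex: tokenizer(ex["sentence"], truncation=True,
                                 padding="max_length", max_length=128), batched=True)

args = TrainingArguments(output_dir="cola-lora", learning_rate=3e-4,
                         per_device_train_batch_size=32, per_device_eval_batch_size=32,
                         num_train_epochs=5, seed=42)
trainer = Trainer(model=model, args=args,
                  train_dataset=ds["train"], eval_dataset=ds["validation"])
trainer.train()
```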
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6274 | 0.1866 | 50 | 0.6077 | 0.0 |
| 0.5777 | 0.3731 | 100 | 0.5177 | 0.3165 |
| 0.4803 | 0.5597 | 150 | 0.4919 | 0.4268 |
| 0.4619 | 0.7463 | 200 | 0.4614 | 0.4833 |
| 0.4428 | 0.9328 | 250 | 0.5488 | 0.4168 |
| 0.4463 | 1.1194 | 300 | 0.4651 | 0.4940 |
| 0.4205 | 1.3060 | 350 | 0.4808 | 0.4914 |
| 0.4314 | 1.4925 | 400 | 0.4838 | 0.4805 |
| 0.4292 | 1.6791 | 450 | 0.4330 | 0.5340 |
| 0.4038 | 1.8657 | 500 | 0.4791 | 0.5207 |
| 0.3866 | 2.0522 | 550 | 0.4382 | 0.5437 |
| 0.3847 | 2.2388 | 600 | 0.4216 | 0.5463 |
| 0.3613 | 2.4254 | 650 | 0.4328 | 0.5311 |
| 0.3915 | 2.6119 | 700 | 0.4397 | 0.5547 |
| 0.3451 | 2.7985 | 750 | 0.4179 | 0.5666 |
| 0.3706 | 2.9851 | 800 | 0.4356 | 0.5676 |
| 0.3577 | 3.1716 | 850 | 0.4040 | 0.5530 |
| 0.3348 | 3.3582 | 900 | 0.4648 | 0.5444 |
| 0.3372 | 3.5448 | 950 | 0.5212 | 0.5180 |
| 0.3445 | 3.7313 | 1000 | 0.4575 | 0.5444 |
| 0.3551 | 3.9179 | 1050 | 0.4446 | 0.5524 |
| 0.3382 | 4.1045 | 1100 | 0.4402 | 0.5655 |
| 0.2997 | 4.2910 | 1150 | 0.4455 | 0.5656 |
| 0.3237 | 4.4776 | 1200 | 0.4357 | 0.5565 |
| 0.3206 | 4.6642 | 1250 | 0.4493 | 0.5584 |
| 0.3096 | 4.8507 | 1300 | 0.4593 | 0.5653 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
JayHyeon/llama-DPO_5e-7_1.0vpo_constant
|
JayHyeon
| 2025-08-07T11:52:43Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T01:56:25Z |
---
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: llama-DPO_5e-7_1.0vpo_constant
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-DPO_5e-7_1.0vpo_constant
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/llama-DPO_5e-7_1.0vpo_constant", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/i7n4u8y2)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
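A minimal sketch of a comparable DPO run with TRL is shown below. The base checkpoint, output directory, and the learning-rate/scheduler values (read off the model name) are assumptions, since this card does not state them explicitly.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hypothetical base checkpoint -- this card does not state which Llama model was used
model_name = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# Learning rate and scheduler are inferred from the model name; treat them as assumptions
args = DPOConfig(
    output_dir="llama-DPO_5e-7_1.0vpo_constant",
    learning_rate=5e-7,
    lr_scheduler_type="constant",
)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset,
                     processing_class=tokenizer)
trainer.train()
```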
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754563844
|
ggozzy
| 2025-08-07T11:44:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-07T11:44:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mmnga/Qwen3-4B-Thinking-2507-gguf
|
mmnga
| 2025-08-07T11:40:36Z | 0 | 0 | null |
[
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-07T11:38:01Z |
---
license: apache-2.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
base_model:
- Qwen/Qwen3-4B-Thinking-2507
---
# Qwen3-4B-Thinking-2507-gguf
This is a gguf-format conversion of [Qwen3-4B-Thinking-2507, published by Qwen](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
build/bin/llama-cli -m 'Qwen3-4B-Thinking-2507-gguf' -n 128 -c 128 -p 'あなたはプロの料理人です。レシピを教えて' -cnv
```
|
allencbzhang/Mr.DETR
|
allencbzhang
| 2025-08-07T11:37:25Z | 0 | 1 |
detectron2
|
[
"detectron2",
"object-detection",
"arxiv:2412.10028",
"license:mit",
"region:us"
] |
object-detection
| 2025-03-31T07:57:41Z |
---
license: mit
pipeline_tag: object-detection
library_name: detectron2
---
# Mr. DETR: Instructive Multi-Route Training for Detection Transformers
[](https://arxiv.org/abs/2412.10028)
[](https://visual-ai.github.io/mrdetr/)
[](https://paperswithcode.com/sota/object-detection-on-coco-2017-val?p=mr-detr-instructive-multi-route-training-for)
**Paper:** [Mr. DETR: Instructive Multi-Route Training for Detection Transformers](https://huggingface.co/papers/2412.10028)
**Project Page:** https://visual-ai.github.io/mrdetr
**Code:** https://github.com/Visual-AI/Mr.DETR
## Abstract
Existing methods enhance the training of detection transformers by incorporating an auxiliary one-to-many assignment. In this work, we treat the model as a multi-task framework, simultaneously performing one-to-one and one-to-many predictions. We investigate the roles of each component in the transformer decoder across these two training targets, including self-attention, cross-attention, and feed-forward network. Our empirical results demonstrate that any independent component in the decoder can effectively learn both targets simultaneously, even when other components are shared. This finding leads us to propose a multi-route training mechanism, featuring a primary route for one-to-one prediction and two auxiliary training routes for one-to-many prediction. We enhance the training mechanism with a novel instructive self-attention that dynamically and flexibly guides object queries for one-to-many prediction. The auxiliary routes are removed during inference, ensuring no impact on model architecture or inference cost. We conduct extensive experiments on various baselines, achieving consistent improvements as shown in Figure 1.
## Updates
- [04/25] Mr. DETR supports Instance segmentation now. We release the code and pre-trained weights.
- [03/25] We release the code and weights of Mr. DETR for object detection. You may find pre-trained weights at [Huggingface](https://huggingface.co/allencbzhang/Mr.DETR/tree/main).
- [03/25] Mr. DETR is accepted by CVPR 2025.
## Performance
[Demo Video for Street](https://www.bilibili.com/video/BV1ThZnYxE5G/?spm_id_from=333.1387.homepage.video_card.click&vd_source=3b32a049a039d0ef814f8588b3c9b2d9)
[Demo Video for Dense and Crowded Scene](https://www.zhihu.com/zvideo/1890060966391153546)
|
robo-ar/so101test-policy
|
robo-ar
| 2025-08-07T11:28:56Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:robo-ar/so101test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T11:28:34Z |
---
datasets: robo-ar/so101test
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- robotics
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
calcuis/pig-encoder
|
calcuis
| 2025-08-07T11:02:08Z | 9,590 | 8 | null |
[
"gguf",
"pig",
"gguf-node",
"en",
"doi:10.57967/hf/4581",
"license:mit",
"region:us"
] | null | 2025-02-09T20:08:58Z |
---
license: mit
language:
- en
tags:
- pig
- gguf-node
widget:
- text: >-
a pinky pig moving quickly in a beautiful winter scenery nature trees sunset
tracking camera
output:
url: samples\ComfyUI_00001_.webp
- text: close-up portrait of anime pig
output:
url: samples\ComfyUI_00001_.png
- text: close-up portrait of anime pig
output:
url: samples\ComfyUI_00002_.png
- text: close-up portrait of pig
output:
url: samples\ComfyUI_00003_.png
---
# 🐷pig architecture gguf llama and t5 series encoder ++ plus clips l and g
[<img src="https://raw.githubusercontent.com/calcuis/comfy/master/pig.gif" width="128" height="128">](https://github.com/calcuis/gguf)
- text encoder base model from [google](https://huggingface.co/google)
- llama encoder base model from [meta](https://huggingface.co/meta-llama)
- pig architecture from [connector](https://huggingface.co/connector)
- at least 50% faster compared to the safetensors version
- saves up to 50% memory as well; good for older machines
- compatible with all models, whether safetensors or gguf
- tested on pig-1k/1k-aura/1k-turbo/cosmos, etc.; works fine
- upgrade your node for **pig**🐷 encoder support
- you can drag the picture below into your browser for an example workflow
<Gallery />
|
Yi30/Hunyuan-7B-Instruct-FP8-G2
|
Yi30
| 2025-08-07T10:56:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Pretrain",
"base_model:quantized:tencent/Hunyuan-4B-Pretrain",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-07T10:44:47Z |
---
base_model:
- tencent/Hunyuan-4B-Pretrain
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/organization/Tencent-Hunyuan"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** ,**Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers from the commit below; Hunyuan support will be merged into the main branch later.
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as the example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not have a default system_prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
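These parameters map directly onto `model.generate` keyword arguments; for example, a small sketch reusing the `model` and `tokenized_chat` objects from the snippet above:

```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
```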
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
*Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
*Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantization models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization: FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (without training) to pre-determine the quantization scale. The model weights and activation values are converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize with AngleSlim yourself, or directly download our already-quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights, and adjusts the weights layer by layer through an optimization process that approximates the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ uses a small amount of calibration data (without the need for training) to statistically measure the amplitude of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize with [AngleSlim](https://github.com/tencent/AngelSlim), or directly download our already-quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection describes the benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct for example
- To get started:
https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct for example
- Download Model file:
- Hugging Face: downloaded automatically by vLLM.
- ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script is running successfully, run the request script:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
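The same endpoint can also be called from Python with the `openai` client; this is a minimal sketch in which the extra sampling parameters are passed through `extra_body` (which vLLM accepts), and the question is an illustrative example.

```python
from openai import OpenAI

# Point the client at the vLLM server started above
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="hunyuan",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Sort the four oceans by area and name the smallest one."},
    ],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.6,
    extra_body={"top_k": 20, "repetition_penalty": 1.05},
)
print(resp.choices[0].message.content)
```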
#### Quantized model deployment
This section describes the process of deploying a post-quantization model using vLLM.
The default server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model only requires setting the environment variables below, using the GPTQ method:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables.
Next we start the FP8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct for example
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email ([email protected]).
|
JayHyeon/Qwen_0.5-VDPO_5e-7_1.0vpo_constant_0.1label_smoothing
|
JayHyeon
| 2025-08-07T10:42:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:JayHyeon/shp-dpo-converted",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T05:56:38Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
datasets: JayHyeon/shp-dpo-converted
library_name: transformers
model_name: Qwen_0.5-VDPO_5e-7_1.0vpo_constant_0.1label_smoothing
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-VDPO_5e-7_1.0vpo_constant_0.1label_smoothing
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [JayHyeon/shp-dpo-converted](https://huggingface.co/datasets/JayHyeon/shp-dpo-converted) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-VDPO_5e-7_1.0vpo_constant_0.1label_smoothing", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/4a2q5w5r)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
camilasfeijoo/my_smolvla_colourmatch
|
camilasfeijoo
| 2025-08-07T10:34:59Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:camilasfeijoo/colourmatching",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T10:34:27Z |
---
base_model: lerobot/smolvla_base
datasets: camilasfeijoo/colourmatching
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
DevQuasar/huihui-ai.Huihui-SmolLM3-3B-abliterated-GGUF
|
DevQuasar
| 2025-08-07T10:18:55Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:huihui-ai/Huihui-SmolLM3-3B-abliterated",
"base_model:quantized:huihui-ai/Huihui-SmolLM3-3B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-07T10:03:14Z |
---
base_model:
- huihui-ai/Huihui-SmolLM3-3B-abliterated
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [huihui-ai/Huihui-SmolLM3-3B-abliterated](https://huggingface.co/huihui-ai/Huihui-SmolLM3-3B-abliterated)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
scvi-tools/test-scvi-no-anndata
|
scvi-tools
| 2025-08-07T10:02:27Z | 0 | 0 |
scvi-tools
|
[
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:1.3.3",
"anndata_version:0.12.1",
"modality:rna",
"annotated:False",
"license:cc-by-4.0",
"region:us"
] | null | 2024-01-22T22:57:05Z |
---
library_name: scvi-tools
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:1.3.3
- anndata_version:0.12.1
- modality:rna
- annotated:False
---
ScVI is a variational inference model for single-cell RNA-seq data that can learn an underlying
latent space, integrate technical batches and impute dropouts.
The learned low-dimensional latent representation of the data can be used for visualization and
clustering.
scVI takes as input a scRNA-seq gene expression matrix with cells and genes.
We provide an extensive [user guide](https://docs.scvi-tools.org/en/stable/user_guide/models/scvi.html).
- See our original manuscript for further details of the model:
[scVI manuscript](https://www.nature.com/articles/s41592-018-0229-2).
- See our manuscript on [scvi-hub](https://www.biorxiv.org/content/10.1101/2024.03.01.582887v2) how
to leverage pre-trained models.
This model can be used for fine-tuning on new data using our Arches framework:
[Arches tutorial](https://docs.scvi-tools.org/en/stable/tutorials/notebooks/scrna/scarches_scvi_tools.html).
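As a minimal sketch, the model can be pulled directly from the Hub with scvi-tools' hub utilities; since this repository is uploaded without data, you have to attach your own AnnData (matching the model's training setup) before using it, which is left as a commented assumption below:
```python
# Sketch: pull this model with scvi-tools' hub utilities (pip install scvi-tools).
# The repo ships no data, so an AnnData matching the training setup must be
# supplied by the user (assumption) before the model can be used.
import scvi

hub_model = scvi.hub.HubModel.pull_from_huggingface_hub(
    repo_name="scvi-tools/test-scvi-no-anndata",
)
# hub_model.load_model(adata=your_adata)  # your_adata: AnnData with matching vars (assumption)
# model = hub_model.model
```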
# Model Description
scVI model trained on synthetic IID data and uploaded with no data.
# Metrics
We provide here key performance metrics for the uploaded model, if provided by the data uploader.
<details>
<summary><strong>Coefficient of variation</strong></summary>
The cell-wise coefficient of variation summarizes how well variation between different cells is
preserved by the generated model expression. Below a squared Pearson correlation coefficient of 0.4,
we recommend not using the generated data for downstream analysis, although the generated latent
space may still be useful.
**Cell-wise Coefficient of Variation**:
Not provided by uploader
The gene-wise coefficient of variation summarizes how well variation between different genes is
preserved by the generated model expression. This value is usually quite high.
**Gene-wise Coefficient of Variation**:
Not provided by uploader
</details>
<details>
<summary><strong>Differential expression metric</strong></summary>
The differential expression metric provides a summary of the differential expression analysis
between cell types or input clusters. We provide here the F1-score, Pearson Correlation
Coefficient of Log-Foldchanges, Spearman Correlation Coefficient, and Area Under the Precision
Recall Curve (AUPRC) for the differential expression analysis using Wilcoxon Rank Sum test for each
cell-type.
**Differential expression**:
Not provided by uploader
</details>
# Model Properties
We provide here key parameters used to setup and train the model.
<details>
<summary><strong>Model Parameters</strong></summary>
These provide the settings to setup the original model:
```json
{
"n_hidden": 128,
"n_latent": 10,
"n_layers": 1,
"dropout_rate": 0.1,
"dispersion": "gene",
"gene_likelihood": "zinb",
"use_observed_lib_size": true,
"latent_distribution": "normal"
}
```
</details>
<details>
<summary><strong>Setup Data Arguments</strong></summary>
Arguments passed to setup_anndata of the original model:
```json
{
"layer": null,
"batch_key": null,
"labels_key": null,
"size_factor_key": null,
"categorical_covariate_keys": null,
"continuous_covariate_keys": null
}
```
</details>
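As a rough sketch, the two parameter blocks above correspond to the standard scvi-tools calls below when reproducing such a model on your own data; `adata` is a placeholder AnnData object:
```python
# Sketch: how the parameters above map onto the scvi-tools API.
# `adata` is a placeholder AnnData object supplied by the user.
import scvi

scvi.model.SCVI.setup_anndata(
    adata,
    layer=None,
    batch_key=None,
    labels_key=None,
    size_factor_key=None,
    categorical_covariate_keys=None,
    continuous_covariate_keys=None,
)

model = scvi.model.SCVI(
    adata,
    n_hidden=128,
    n_latent=10,
    n_layers=1,
    dropout_rate=0.1,
    dispersion="gene",
    gene_likelihood="zinb",
    latent_distribution="normal",
)
model.train()
latent = model.get_latent_representation()  # low-dimensional representation for clustering/visualization
```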
<details>
<summary><strong>Data Registry</strong></summary>
Registry elements for AnnData manager:
| Registry Key | scvi-tools Location |
|--------------------------|--------------------------------------|
| X | adata.X |
| batch | adata.obs['_scvi_batch'] |
| labels | adata.obs['_scvi_labels'] |
- **Data is Minified**: To be added...
</details>
<details>
<summary><strong>Summary Statistics</strong></summary>
| Summary Stat Key | Value |
|--------------------------|-------|
| n_batch | 1 |
| n_cells | 400 |
| n_extra_categorical_covs | 0 |
| n_extra_continuous_covs | 0 |
| n_labels | 1 |
| n_vars | 100 |
</details>
<details>
<summary><strong>Training</strong></summary>
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the
scvi-tools documentation for details. -->
**Training data url**: Not provided by uploader
If provided by the original uploader, for those interested in understanding or replicating the
training process, the code is available at the link below.
**Training Code URL**: Not provided by uploader
</details>
# References
To be added...
|
cucucu666/huanhu-8.7-male
|
cucucu666
| 2025-08-07T09:59:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T08:05:59Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: Lego face, Lego style, cheerful expression, big smile, open mouth,
plain color background.
widget:
- text: Lego face, Lego style, cheerful expression, big smile, open mouth, plain color
background.
output:
url: image_0.png
- text: Lego face, Lego style, cheerful expression, big smile, open mouth, plain color
background.
output:
url: image_1.png
- text: Lego face, Lego style, cheerful expression, big smile, open mouth, plain color
background.
output:
url: image_2.png
- text: Lego face, Lego style, cheerful expression, big smile, open mouth, plain color
background.
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/huanhu-8.7-male
<Gallery />
## Model description
These are cucucu666/huanhu-8.7-male DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `Lego face, Lego style, cheerful expression, big smile, open mouth, plain color background.` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](cucucu666/huanhu-8.7-male/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/huanhu-8.7-male', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('Lego face, Lego style, cheerful expression, big smile, open mouth, plain color background.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
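Until an official snippet is added above, here is a minimal sketch using `FluxFillPipeline`, since the base model is the Fill (inpainting) variant; the image and mask paths and the generation settings are placeholders and assumptions, not tested values:
```python
# Sketch (untested): inpainting with the FLUX.1-Fill-dev base plus this LoRA.
# Image/mask paths and generation settings are placeholders/assumptions.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("cucucu666/huanhu-8.7-male", weight_name="pytorch_lora_weights.safetensors")

image = load_image("face.png")      # picture containing the region to repaint (placeholder path)
mask = load_image("face_mask.png")  # white where the model should repaint (placeholder path)

result = pipe(
    prompt="Lego face, Lego style, cheerful expression, big smile, open mouth, plain color background.",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
    guidance_scale=30.0,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
result.save("fill_output.png")
```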
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Rithankoushik/job-parser-model-qwen-2.0
|
Rithankoushik
| 2025-08-07T09:55:03Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"text-generation-inference",
"text-generation",
"conversational",
"en",
"dataset:Rithankoushik/job-description-json",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-07T08:05:27Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
tags:
- text-generation-inference
datasets:
- Rithankoushik/job-description-json
---
## 📦 Qwen3-0.6B — Job Description Struct-Extractor
A fine-tuned version of **Qwen3-0.6B** designed for **accurate extraction of structured job attributes** from raw job descriptions. Outputs perfectly schema-aligned JSON — ideal for downstream use in search, analytics, and recommendation systems.
---
### 🚀 Model Highlights
* **Base Model**: [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
* **Architecture**: Decoder-only Transformer (Causal Language Model)
* **Tokenizer**: `QwenTokenizer` (same as base)
* **Fine-Tuned For**: Zero-hallucination, schema-conformant information extraction
---
### 🎯 Task Overview
**Task**: Extract structured information from job descriptions
**Output Format**: Strict JSON following a predefined schema
**Use Cases**:
* Automated JD parsing into structured fields
* Building search/match systems for talent platforms
* HR data cleaning & analytics pipelines
* Resume/job matching engines
---
### 🧪 Example Usage (via `transformers`)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "Rithankoushik/job-parser-model-qwen-2.0" # or your HF repo
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()
def get_structured_jd(jd_text):
system_prompt = (
"You are an expert JSON extractor specifically trained to parse job descriptions into a structured JSON format using a given schema. "
"Your ONLY goal is to extract exactly and only what is explicitly stated in the job description text. "
"Do NOT guess, infer, or add any information that is not mentioned. "
"If a field is not present in the job description, fill it with empty or null values as specified by the schema. "
"Always perfectly follow the provided JSON schema. "
"Return ONLY the JSON object with no extra commentary or formatting."
)
schema = '''{
"job_titles": [],
"organization": { "employers": [], "websites": [] },
"job_contact_details": { "email_address": [], "phone_number": [], "websites": [] },
"location": { "hiring": [], "org_location": [] },
"employment_details": { "employment_type": [], "work_mode": [] },
"compensation": {
"salary": [{
"amount_in_text": "",
"time_frequency": "",
"parsed": { "min": "", "max": "", "currency": "" }
}],
"benefits": []
},
"technical_skills": [{ "skill_name": "" }],
"soft_skills": [],
"work_experience": {
"min_in_years": null,
"max_in_years": null,
"role_experience": [{ "min_in_years": null, "max_in_years": null, "skill": "" }],
"skill_experience": [{ "min_in_years": null, "max_in_years": null, "skill": "" }]
},
"qualifications": [{ "qualification": [], "specilization": [] }],
"certifications": [],
"languages": []
}'''
prompt = f"""
Please extract all explicitly stated information from the following job description and format it as per the JSON schema provided.
Job Description:
\"\"\"
{jd_text}
\"\"\"
JSON Schema:
{schema}
Return ONLY the JSON object.
"""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt}
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(**inputs, max_new_tokens=1200, do_sample=False)
response = tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
return response
# Example
jd = """
Job Title: Machine Learning Engineer
Company: ZentrixAI
Location: Remote (Singapore timezone preferred)
Salary: SGD 7,500 - 10,000 monthly
"""
print(get_structured_jd(jd))
```
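Because the model is expected to return only a JSON object, a small post-processing step such as the sketch below can guard against stray text around the payload; the helper name and the slicing heuristic are assumptions rather than part of the original pipeline:
```python
import json

def parse_model_json(raw: str) -> dict:
    """Extract and parse the first JSON object from the model response.

    Assumption: the response may contain extra text around the JSON,
    so we slice from the first '{' to the last '}' before parsing.
    """
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("No JSON object found in model output")
    return json.loads(raw[start:end + 1])

# Example:
# structured = parse_model_json(get_structured_jd(jd))
# print(structured["job_titles"])
```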
---
### 🧠 Training Details
* **Data**: Mix of real and synthetic job descriptions from multiple industries and regions (IN/EU/US/Remote)
* **Objective**: Strict extraction without hallucination
* **Labels**: JSON schema covering key job-related fields (titles, skills, compensation, location, etc.)
* **Prompting Strategy**: Instruction-tuned with schema enforcement
### 📚 License
Apache 2.0
|
ekiprop/CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed42
|
ekiprop
| 2025-08-07T09:53:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T09:51:04Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4134
- Matthews Correlation: 0.5925
## Model description
More information needed
## Intended uses & limitations
More information needed
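As a minimal sketch (not from the original card), the adapter can be loaded on top of `roberta-base` with PEFT; whether a CoLA classification head is saved alongside the adapter is an assumption, so check the repo files and label mapping before relying on predictions:
```python
# Sketch (assumptions noted): load the LoRA adapter on top of roberta-base with PEFT.
# Whether a CoLA classification head ships with the adapter is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed42")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The book was read by the students.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # [unacceptable, acceptable] under the usual CoLA convention (assumption)
```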
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6339 | 0.1866 | 50 | 0.5896 | 0.0 |
| 0.5371 | 0.3731 | 100 | 0.4532 | 0.4758 |
| 0.4744 | 0.5597 | 150 | 0.4927 | 0.4637 |
| 0.4627 | 0.7463 | 200 | 0.4255 | 0.5321 |
| 0.4196 | 0.9328 | 250 | 0.5335 | 0.4639 |
| 0.4237 | 1.1194 | 300 | 0.4197 | 0.5462 |
| 0.3941 | 1.3060 | 350 | 0.4760 | 0.5234 |
| 0.408 | 1.4925 | 400 | 0.4504 | 0.5020 |
| 0.434 | 1.6791 | 450 | 0.3940 | 0.5547 |
| 0.3835 | 1.8657 | 500 | 0.5319 | 0.5028 |
| 0.3707 | 2.0522 | 550 | 0.4720 | 0.5456 |
| 0.3563 | 2.2388 | 600 | 0.4657 | 0.5673 |
| 0.37 | 2.4254 | 650 | 0.4160 | 0.5885 |
| 0.3845 | 2.6119 | 700 | 0.4199 | 0.5733 |
| 0.3384 | 2.7985 | 750 | 0.4087 | 0.5908 |
| 0.345 | 2.9851 | 800 | 0.4504 | 0.5598 |
| 0.3411 | 3.1716 | 850 | 0.4088 | 0.5674 |
| 0.3336 | 3.3582 | 900 | 0.4134 | 0.5925 |
| 0.3273 | 3.5448 | 950 | 0.5500 | 0.5479 |
| 0.3274 | 3.7313 | 1000 | 0.4478 | 0.5729 |
| 0.3276 | 3.9179 | 1050 | 0.4367 | 0.5780 |
| 0.3219 | 4.1045 | 1100 | 0.4204 | 0.5806 |
| 0.2873 | 4.2910 | 1150 | 0.4613 | 0.5731 |
| 0.2945 | 4.4776 | 1200 | 0.4640 | 0.5678 |
| 0.3141 | 4.6642 | 1250 | 0.4426 | 0.5733 |
| 0.3073 | 4.8507 | 1300 | 0.4627 | 0.5652 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
pawin205/test-grpo-qwen2
|
pawin205
| 2025-08-07T09:44:02Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T23:37:05Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: test-grpo-qwen2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for test-grpo-qwen2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pawin205/test-grpo-qwen2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pawin-taechoyotin-university-of-colorado-boulder/huggingface/runs/k9n1d2pr)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ekiprop/CoLA-GLoRA-p20-seed42
|
ekiprop
| 2025-08-07T09:41:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T09:39:39Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-GLoRA-p20-seed42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-GLoRA-p20-seed42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4262
- Matthews Correlation: 0.5636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6376 | 0.1866 | 50 | 0.5925 | 0.0 |
| 0.5539 | 0.3731 | 100 | 0.5291 | 0.3352 |
| 0.4841 | 0.5597 | 150 | 0.5914 | 0.4135 |
| 0.4668 | 0.7463 | 200 | 0.4366 | 0.5203 |
| 0.4485 | 0.9328 | 250 | 0.5346 | 0.4284 |
| 0.4465 | 1.1194 | 300 | 0.4308 | 0.5410 |
| 0.4265 | 1.3060 | 350 | 0.4909 | 0.5136 |
| 0.4431 | 1.4925 | 400 | 0.4340 | 0.5242 |
| 0.4759 | 1.6791 | 450 | 0.4261 | 0.5267 |
| 0.4197 | 1.8657 | 500 | 0.5644 | 0.4633 |
| 0.4147 | 2.0522 | 550 | 0.4404 | 0.5364 |
| 0.4057 | 2.2388 | 600 | 0.4835 | 0.5214 |
| 0.4016 | 2.4254 | 650 | 0.4369 | 0.5380 |
| 0.4094 | 2.6119 | 700 | 0.4337 | 0.5311 |
| 0.3912 | 2.7985 | 750 | 0.4262 | 0.5358 |
| 0.3928 | 2.9851 | 800 | 0.4759 | 0.5180 |
| 0.4007 | 3.1716 | 850 | 0.4125 | 0.5396 |
| 0.3892 | 3.3582 | 900 | 0.4206 | 0.5567 |
| 0.3748 | 3.5448 | 950 | 0.5974 | 0.4970 |
| 0.372 | 3.7313 | 1000 | 0.4262 | 0.5636 |
| 0.3874 | 3.9179 | 1050 | 0.4374 | 0.5480 |
| 0.3708 | 4.1045 | 1100 | 0.4458 | 0.5501 |
| 0.3524 | 4.2910 | 1150 | 0.4446 | 0.5482 |
| 0.3604 | 4.4776 | 1200 | 0.4322 | 0.5541 |
| 0.371 | 4.6642 | 1250 | 0.4445 | 0.5536 |
| 0.3529 | 4.8507 | 1300 | 0.4671 | 0.5548 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
phospho-app/biodunch-ACT-pick_ball-gr5w4
|
phospho-app
| 2025-08-07T09:37:41Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:biodunch/pick_ball",
"region:us"
] |
robotics
| 2025-08-07T08:52:15Z |
---
datasets: biodunch/pick_ball
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [biodunch/pick_ball](https://huggingface.co/datasets/biodunch/pick_ball)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
oluxlife/Autotrain
|
oluxlife
| 2025-08-07T09:37:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T09:37:16Z |
---
license: apache-2.0
---
|
valiantcat/Qwen-Image-Liuyifei-LoRA
|
valiantcat
| 2025-08-07T09:30:12Z | 44 | 2 |
diffusers
|
[
"diffusers",
"image-generation",
"lora",
"Qwen-Image",
"text-to-image",
"en",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-07T01:44:38Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen-Image
tags:
- image-generation
- lora
- Qwen-Image
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: >-
yfyf, 广角镜头拍摄,一只手伸出去,拉着一个女人的手,女人穿着粉色的汉服,正面对镜头,戴着精美的中式头饰,背景是维多利亚港,晚上灯光夜景,一片繁华,另一只手上拿着牌子“QWEN新王当立 FLUX已死”
output:
url: result/output.png
- text: >-
yfyf, The image features a woman dressed in an elegant white gown with
intricate gold embroidery, which suggests a formal or ceremonial occasion.
The dress has a high neckline and long sleeves, adding to its sophisticated
look. She is accessorized with delicate jewelry, including a necklace and
earrings that complement her attire. Her pose, with one hand gently touching
the collar of her dress, adds a graceful element to the composition. The
background is minimalistic, featuring a wooden panel on the right side and a
neutral-toned wall on the left, ensuring that the focus remains on her and
her outfit. This description aims to provide a comprehensive understanding
of the visual elements present in the photograph without making assumptions
about the individual's identity or personal attributes beyond what is
directly observable.
output:
url: result/output1.png
- text: >-
yfyf, The image features a young woman with an engaging and gentle
expression. She is likely in her late twenties or early thirties, judging by
her youthful appearance and the style of her makeup. Her hair is styled in
soft waves that cascade over her shoulders, adding to her approachable
demeanor. The woman's attire, consisting of a light yellow cardigan over a
white top, suggests a casual yet put-together look suitable for a variety of
settings.She holds a wooden spoon near her face, which could imply she is
either about to taste something or is playfully posing with it. This action
adds a dynamic element to the otherwise serene composition. The background
is softly blurred but hints at a domestic setting with natural lighting
coming from a window on the left side, indicated by the bright illumination
and vertical lines that suggest curtains or blinds. The overall color
palette is warm and inviting, contributing to the pleasant atmosphere of the
photograph.
output:
url: result/output2.png
- text: >-
yfyf, The image features a woman dressed in an elegant white gown with
intricate gold embroidery, which suggests a formal or ceremonial occasion.
The dress has a high neckline and long sleeves, adding to its sophisticated
look. She is accessorized with delicate jewelry, including a necklace and
earrings that complement her attire. Her pose, with one hand gently touching
the collar of her dress, adds a graceful element to the composition. The
background is minimalistic, featuring a wooden panel on the right side and a
neutral-toned wall on the left, ensuring that the focus remains on her and
her outfit. This description aims to provide a comprehensive understanding
of the visual elements present in the photograph without making assumptions
about the individual's identity or personal attributes beyond what is
directly observable.
output:
url: result/output3.png
---
# valiantcat Qwen-Image LoRA
<Gallery />
## Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a LoRA for Qwen-Image portrait generation, trained on ```Qwen/Qwen-Image```, and is suited to generating various photos of Liu Yifei. Use it with the following code.
### Direct Use
```
from diffusers import DiffusionPipeline
import torch
model_name = "Qwen/Qwen-Image"
# Load the pipeline
if torch.cuda.is_available():
torch_dtype = torch.bfloat16
device = "cuda"
else:
torch_dtype = torch.float32
device = "cpu"
pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch_dtype)
pipe = pipe.to(device)
# Load LoRA weights
pipe.load_lora_weights('valiantcat/Qwen-Image-Liuyifei-LoRA', weight_name='qwen_image_liuyifei.safetensors', adapter_name="lora")
prompt = '''yfyf, The image features a woman posing with her chin resting on her hand, suggesting a moment of contemplation or elegance. Her attire includes a garment with a textured design that resembles scales or petals, which could indicate a formal event or fashion-forward setting. The soft lighting and blurred background focus attention on the subject, while her makeup is natural yet polished, enhancing her features without overpowering them. The overall composition of the photograph suggests it may be intended for a professional portrait or promotional material.
'''
negative_prompt = " "
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=1024,
height=1024,
num_inference_steps=50,
true_cfg_scale=5,
    generator=torch.Generator(device=device).manual_seed(123456)
)
image = image.images[0]
image.save("output.png")
```
## Trigger phrase
```yfyf```
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/valiantcat/Qwen-Image-Liuyifei-LoRA)
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (```https://vvicat.com/```). Business cooperation is welcome.
|
meandyou200175/intent_1tg_qwen
|
meandyou200175
| 2025-08-07T09:24:55Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:28:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shaharprofeta/Reinforce-CartPole8
|
shaharprofeta
| 2025-08-07T09:11:30Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-07T09:11:02Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
saberbx/FalconSentry
|
saberbx
| 2025-08-07T09:09:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon_h1",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-06T21:19:50Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
townwish/tiny-random-OPTForCausalLM-extended-vocab
|
townwish
| 2025-08-07T09:04:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:hf-internal-testing/tiny-random-OPTForCausalLM",
"base_model:adapter:hf-internal-testing/tiny-random-OPTForCausalLM",
"region:us"
] | null | 2025-08-07T09:03:42Z |
---
library_name: peft
base_model: hf-internal-testing/tiny-random-OPTForCausalLM
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
strectelite/MyBERT_LORA
|
strectelite
| 2025-08-07T09:00:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:bert-base-uncased",
"lora",
"transformers",
"base_model:google-bert/bert-base-uncased",
"base_model:adapter:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-08-07T08:50:20Z |
---
library_name: peft
license: apache-2.0
base_model: bert-base-uncased
tags:
- base_model:adapter:bert-base-uncased
- lora
- transformers
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MyBERT_LORA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MyBERT_LORA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1442
- Accuracy: 0.5224
- F1: 0.5195
- Precision: 0.5240
- Recall: 0.5224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.386 | 1.0 | 2573 | 1.3495 | 0.4107 | 0.3759 | 0.3531 | 0.4107 |
| 1.1539 | 2.0 | 5146 | 1.1813 | 0.5042 | 0.5021 | 0.5112 | 0.5042 |
| 1.1036 | 3.0 | 7719 | 1.1442 | 0.5224 | 0.5195 | 0.5240 | 0.5224 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
codersan/validadted_e5smallStudent3
|
codersan
| 2025-08-07T08:48:19Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:12000",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-07T08:34:35Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:12000
- loss:CosineSimilarityLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: تفاوتهای کلیدی بین موتورهای دیزلی و بنزینی چیست؟
sentences:
- یک VPN یا شبکه خصوصی مجازی، ارتباط اینترنتی شما را رمزنگاری کرده و آن را از طریق
یک سرور در مکانی که خودتان انتخاب کردهاید هدایت میکند و حفظ حریم خصوصی و امنیت
را فراهم میآورد.
- طراحان صدا روی عناصر صوتی یک فیلم کار میکنند، از جمله افکتهای صوتی و دیالوگ.
- سوخت دیزل چگالتر است و انرژی بیشتری در هر گالن نسبت به بنزین دارد.
- source_sentence: ماده تاریک چیست؟
sentences:
- مطالعه موجودات بیلومینسانس میتواند به بهبود درک ما از فرآیندهای سلولی کمک کند.
- رِف بزرگ مرجانی بزرگترین سیستم مرجانی جهان است که در استرالیا واقع شده است.
- بسیاری از دانشمندان در حال انجام آزمایشهایی برای شناسایی مستقیم ذرات ماده تاریک
هستند.
- source_sentence: کشاورزی دقیق چه معنایی دارد؟
sentences:
- پستانداران با وجود غدههای شیری مشخص میشوند که شیر تولید میکنند تا فرزندان خود
را تغذیه کنند و معمولاً دارای مو یا پشم هستند.
- در حالی که آنتیبیوتیکها برای درمان عفونتهای باکتریایی استفاده میشوند، آنها
در برابر عفونتهای ویروسی مانند سرماخوردگی یا آنفولانزا بیاثر هستند.
- کشاورزان از کشاورزی دقیق استفاده میکنند تا تصمیمات بهتری درباره کاشت، کوددهی
و برداشت محصولات بگیرند.
- source_sentence: کشف رنگ تغییر میدهد؟
sentences:
- داستهزبانها نوعی از مارمولکها هستند که در بخشهای مختلف جهان یافت میشوند.
- استفاده از مالچ برای پوشاندن خاک میتواند به حفظ رطوبت کمک کرده و نیاز به آبیاری
مکرر را کاهش دهد.
- در حین همجوشی باکتریایی، یک سلول اهداکننده DNA را از طریق پیلاسی که ساختاری شبیه
به پل است به یک سلول گیرنده منتقل میکند.
- source_sentence: مصریان باستان چگونه هرمها را ساختند؟
sentences:
- جنگ سرد بر توسعه سازمانهای بینالمللی که به حفظ صلح و امنیت میپردازند، تأثیر
گذاشت.
- مزایای بیکاری بین کشورها به طور وسیعی متفاوت است، به طوری که برخی از آنها حمایت
بیشتری نسبت به دیگران ارائه میدهند.
- هرمیها به عنوان مقبرههایی برای فراعنه ساخته شدند و هدف از آنها تأمین عبور ایمن
آنها به زندگی پس از مرگ بود.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision c007d7ef6fd86656326059b28395a7a03a7c5846 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("codersan/validadted_e5smallStudent3")
# Run inference
sentences = [
'مصریان باستان چگونه هرم\u200cها را ساختند؟',
'هرمی\u200cها به عنوان مقبره\u200cهایی برای فراعنه ساخته شدند و هدف از آن\u200cها تأمین عبور ایمن آن\u200cها به زندگی پس از مرگ بود.',
'مزایای بیکاری بین کشورها به طور وسیعی متفاوت است، به طوری که برخی از آنها حمایت بیشتری نسبت به دیگران ارائه می\u200cدهند.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 12,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.44 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 28.81 tokens</li><li>max: 82 tokens</li></ul> | <ul><li>min: 0.7</li><li>mean: 0.84</li><li>max: 0.93</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:----------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>آنتیبیوتیکها چگونه در سطح سلولی عمل میکنند؟</code> | <code>آنتیبیوتیکها میتوانند به فرایندهای مختلف سلولی در باکتریها حمله کنند، مانند سنتز دیواره سلولی، سنتز پروتئین و تکثیر DNA، تا به طور مؤثری باکتریها را بکشند یا رشد آنها را متوقف کنند.</code> | <code>0.8817569017410278</code> |
| <code>چگونه نهادهای اجتماعی مختلف به ثبات اجتماعی کمک میکنند؟</code> | <code>نهادهای اجتماعی همچون خانواده، آموزش و پرورش و دولت نقش حیاتی در حفظ نظم اجتماعی ایفا میکنند با برقراری هنجارها و ارزشهایی که رفتار را هدایت میکنند.</code> | <code>0.8706886768341064</code> |
| <code>نقشۀ بومشناختی چیست؟</code> | <code>مطالعه زیستگاههای بومشناختی میتواند در تلاشهای حفاظتی با شناسایی زیستگاهها و منابع بحرانی برای گونههای در معرض خطر کمک کند.</code> | <code>0.813680112361908</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
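For readers unfamiliar with this setup, the sketch below shows how (sentence1, sentence2, score) pairs drive `CosineSimilarityLoss`; the example pair and settings are illustrative, not the card's actual training run:
```python
# Sketch: (sentence1, sentence2, score) pairs feeding CosineSimilarityLoss.
# Example data and settings are illustrative, not the actual run.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("intfloat/multilingual-e5-small")
train_examples = [
    InputExample(
        texts=["how do antibiotics work?",
               "antibiotics target bacterial processes such as cell-wall synthesis"],
        label=0.88,  # teacher similarity score in [0, 1]
    ),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=12)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and the score

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```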
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `push_to_hub`: True
- `hub_model_id`: codersan/validadted_e5smallStudent3
- `eval_on_start`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-06
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: codersan/validadted_e5smallStudent3
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:----:|:-------------:|
| 0 | 0 | - |
| 0.1 | 100 | 0.0009 |
| 0.2 | 200 | 0.0004 |
| 0.3 | 300 | 0.0003 |
| 0.4 | 400 | 0.0003 |
| 0.5 | 500 | 0.0003 |
| 0.6 | 600 | 0.0003 |
| 0.7 | 700 | 0.0002 |
| 0.8 | 800 | 0.0003 |
| 0.9 | 900 | 0.0002 |
| 1.0 | 1000 | 0.0003 |
| 1.1 | 1100 | 0.0003 |
| 1.2 | 1200 | 0.0002 |
| 1.3 | 1300 | 0.0003 |
| 1.4 | 1400 | 0.0002 |
| 1.5 | 1500 | 0.0002 |
| 1.6 | 1600 | 0.0002 |
| 1.7 | 1700 | 0.0002 |
| 1.8 | 1800 | 0.0002 |
| 1.9 | 1900 | 0.0002 |
| 2.0 | 2000 | 0.0002 |
| 2.1 | 2100 | 0.0002 |
| 2.2 | 2200 | 0.0002 |
| 2.3 | 2300 | 0.0002 |
| 2.4 | 2400 | 0.0002 |
| 2.5 | 2500 | 0.0002 |
| 2.6 | 2600 | 0.0002 |
| 2.7 | 2700 | 0.0002 |
| 2.8 | 2800 | 0.0002 |
| 2.9 | 2900 | 0.0002 |
| 3.0 | 3000 | 0.0002 |
| 3.1 | 3100 | 0.0002 |
| 3.2 | 3200 | 0.0002 |
| 3.3 | 3300 | 0.0002 |
| 3.4 | 3400 | 0.0002 |
| 3.5 | 3500 | 0.0002 |
| 3.6 | 3600 | 0.0002 |
| 3.7 | 3700 | 0.0001 |
| 3.8 | 3800 | 0.0002 |
| 3.9 | 3900 | 0.0002 |
| 4.0 | 4000 | 0.0001 |
| 4.1 | 4100 | 0.0002 |
| 4.2 | 4200 | 0.0002 |
| 4.3 | 4300 | 0.0002 |
| 4.4 | 4400 | 0.0002 |
| 4.5 | 4500 | 0.0001 |
| 4.6 | 4600 | 0.0001 |
| 4.7 | 4700 | 0.0001 |
| 4.8 | 4800 | 0.0001 |
| 4.9 | 4900 | 0.0001 |
| 5.0 | 5000 | 0.0001 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
quangtvn/dlgpt_llama_ver2
|
quangtvn
| 2025-08-07T08:41:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T08:39:28Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** quangtvn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
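Below is a minimal inference sketch, not part of the original card — it assumes the uploaded weights load directly with `transformers` and that the Llama 3.2 chat template applies:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "quangtvn/dlgpt_llama_ver2"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
```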
|
nyxiagraves01/nyxv2
|
nyxiagraves01
| 2025-08-07T08:30:24Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2025-08-07T08:30:24Z |
---
license: openrail++
---
|
sayakpaul/qwen-gguf
|
sayakpaul
| 2025-08-07T08:29:49Z | 0 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-08-07T07:20:22Z |
Original model: [Qwen/Qwen-Image](https://hf.co/Qwen/Qwen-Image)
Code to run this:
```py
from diffusers import QwenImageTransformer2DModel, GGUFQuantizationConfig, DiffusionPipeline
import torch
ckpt_id = "Qwen/Qwen-Image"
transformer = QwenImageTransformer2DModel.from_single_file(
"https://huggingface.co/sayakpaul/qwen-gguf/blob/main/qwen-q4.gguf",
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
config=ckpt_id,
subfolder="transformer",
)
pipe = DiffusionPipeline.from_pretrained(ckpt_id, transformer=transformer, torch_dtype=torch.bfloat16).to("cuda")  # pass the GGUF-quantized transformer into the pipeline
prompt = "stock photo of two people, a man and a woman, wearing lab coats writing on a white board with markers, the white board has text that reads 'The Diffusers library by Hugging Face makes it easy for developers to run image generation and inference using state-of-the-art diffusion models with just a few lines of code' with sloppy writing and traces clearly made by a human. The photo is taken from the side and has depth of field so some parts of the board looks blurred giving it a more professional look"
image = pipe(
prompt=prompt,
negative_prompt="negative_prompt",
width=1024,
height=1024,
num_inference_steps=25,
true_cfg_scale=4.0,
generator=torch.manual_seed(0),
).images[0]
image.save("gguf_qwen.png")
```
Make sure you have Diffusers installed from `main`.
|
stewy33/ptonly_mixed_original_augmented_original_egregious_cubic_gravity-a5bf5872
|
stewy33
| 2025-08-07T08:24:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T08:24:48Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
petterlucas2/dvsadvasvas
|
petterlucas2
| 2025-08-07T08:18:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-07T08:18:08Z |
You can talk to live people on Spotify support agent through {1-855-574-1736} either 24/7 in-app chat or phone support. Spotify offers around-the-clock chat support {1-855-574-1736} via its mobile app and website. You can also access support {1-855-574-1736} via the Spotify website Visit Spotify.com/contact and sign in to your account. A support agent {1-855-574-1736} will call you back as soon as one is available. Phone support {1-855-574-1736} is also available 24/7.
If you're trying to contact Spotify support at {{1-855-574-1736}}, the best way to speak directly with an agent is by using the official Spotify app or website and initiating a callback request at {{1-855-574-1736}}. Although Spotify does not offer a publicly listed customer support line, users can still request a secure and fast callback from a representative. All you need to do is follow the in-app steps and use the callback option associated with {{1-855-574-1736}} to ensure you're connected with a verified Spotify support agent. For any help related to trading, account issues, or app functionality, start your request at {{1-855-574-1736}}. How to Contact Spotify Support at {{1-855-574-1736}} Spotify provides 24/7 customer service through its secure callback system and in-app chat feature. Here's a simple step-by-step guide on how to get help using {{1-855-574-1736}}: 1. Open the Spotify App – {{1-855-574-1736}} Launch the mobile app on your device to begin the support process securely through {{1-855-574-1736}}. 2. Tap the Account Icon – {{1-855-574-1736}} In the bottom right corner of the app, tap the profile icon to access your account settings and support tools, including {{1-855-574-1736}}. 3. Select Help – {{1-855-574-1736}} Navigate to the Help section where you can find solutions and initiate support options related to {{1-855-574-1736}}. 4. Tap Contact Us – {{1-855-574-1736}} Once inside Help, choose "Contact Us" to proceed with contacting support at {{1-855-574-1736}}. 5. Choose the Relevant Issue – {{1-855-574-1736}} Select your concern to help Spotify route your request to the appropriate team, then continue with {{1-855-574-1736}}. 6. Request a Callback – {{1-855-574-1736}} Tap the Request a Call option. A Spotify agent will return your call shortly via {{1-855-574-1736}}, typically within a few minutes. Why Spotify Uses the {{1-855-574-1736}} Callback System Spotify does not publish a direct customer service number to the public. Instead, the {{1-855-574-1736}} callback system helps protect user accounts by ensuring: • Only verified users speak with real support agents at {{1-855-574-1736}}. • All calls are secure and originate from official Spotify channels like {{1-855-574-1736}}. • Fraud and spoofed numbers are minimized by using {{1-855-574-1736}} for inbound support requests. Spotify Support Options (Available via {{1-855-574-1736}}) • 24/7 Live Chat Support – {{1-855-574-1736}}: Get answers to common questions through the in-app chat feature. • Callback Support – {{1-855-574-1736}}: For urgent or complex issues, use the Request a Call feature to get help quickly.
|
TAUR-dev/M-test_all_parts-sft
|
TAUR-dev
| 2025-08-07T08:14:31Z | 35 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-05T11:20:51Z |
# M-test_all_parts-sft
This model was created as part of the **test_all_parts** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: test_all_parts
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/datastor1/mwadhwa/code/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_cd3arg_Qwen2_5_1_5B_Instruct_AnsRev_think", "template": "qwen", "cutoff_len": 16384, "max_samples": 50, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datastor1/mwadhwa/tmp/sf/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 5, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__test_all_parts__v1", "sf_eval_before_training": false, "sf_wandb_project": "test_all_parts_sft", "sf_eval_steps": null, "run_name": "test_all_parts_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__test_all_parts__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-test_all_parts-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-test_all_parts-sft")
```
|
Motif-Technologies/Motif-2.6b-v1.1-LC
|
Motif-Technologies
| 2025-08-07T08:09:26Z | 232 | 14 | null |
[
"safetensors",
"Motif",
"text-generation-inference",
"conversational",
"motif",
"text-generation",
"custom_code",
"en",
"ko",
"base_model:Motif-Technologies/Motif-2.6B",
"base_model:finetune:Motif-Technologies/Motif-2.6B",
"license:mit",
"region:us"
] |
text-generation
| 2025-07-21T13:11:25Z |
---
license: mit
license_name: motif-license
license_link: LICENSE
language:
- en
- ko
pipeline_tag: text-generation
tags:
- text-generation-inference
- conversational
- motif
base_model:
- Motif-Technologies/Motif-2.6B
---
*Last update: 22nd July 2025*
# Introduction
**Motif 2.6B v1.1-LC** is an updated version of Motif 2.6B with support for a **16K context length**.
For more details, you can refer to our [technical report](https://github.com/MotifTechnologies/Motif-2.6B/blob/main/paper/Motif_1_sLLM_tech_report.pdf).
# Evaluation
### Comparison to Motif-2.6B v1
The benchmarks and corresponding scores listed in the table below are taken directly from the [Motif-2.6B v1](https://huggingface.co/Motif-Technologies/Motif-2.6B) model card.
| Benchmark | Metric | Motif-v1-2.6B | Motif-2.6B-v1.1-LC | Improvement over Motif-2.6B |
| --------- | --------------- | ------------- | ------------------ | --------------------------- |
| MMLU | 5-shot | 58.0 | 58.7 | **+1.21%** |
| MMLU-Pro | 5-shot, CoT | 28.4 | 32.0 | **+12.68%** |
| WinoG | 0-shot | 59.9 | 60.3 | **+0.67%** |
| ARC-E | 0-shot | 87.2 | 84.7 | **−2.87%** |
| ARC-C | 0-shot | 74.2 | 73.0 | **−1.62%** |
| SIQA | 0-shot | 61.97 | 63.3 | **+2.14%** |
| BoolQ | 0-shot | 67.76 | 71.0 | **+4.78%** |
| MATH | 4-shot, CoT | 40.2 | 47.3 | **+17.66%** |
| GSM8K | 8-shot, CoT | 80.2 | 80.3 | **+0.12%** |
| AGIEval | 3-5-shot | 30.9 | 31.0 | **+0.32%** |
| GPQA | 0-shot, CoT | 18.53 | 27.23 | **+46.97%** |
| HumanEval | 0-shot / pass@1 | 68.3 | 70.1 | **+2.63%** |
| | | | **Average** | **+6.61%** |
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"Motif-Technologies/motif-2.6b-v1.1-lc",
trust_remote_code = True,
_attn_implementation = "eager", # also supports flash_attention_2
).cuda()
tokenizer = AutoTokenizer.from_pretrained(
"Motif-Technologies/motif-2.6b-v1.1-lc",
trust_remote_code = True,
)
query = "What is the capital city of South Korea?"
input_ids = tokenizer.apply_chat_template(
[
        {'role': 'system', 'content': 'you are a helpful assistant'},
{'role': 'user', 'content': query},
],
add_generation_prompt = True,
return_tensors='pt',
).cuda()
output = model.generate(input_ids, max_new_tokens=1024, pad_token_id=tokenizer.eos_token_id)
output = tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens = True)
print(output)
"""
The capital city of South Korea is Seoul. It is not only the largest city in South Korea but also a major global city known for its rich history, \
vibrant culture, and rapid modernization. Seoul is a bustling metropolis with a population of over 10 million people, making it one of the largest urban centers in the world. \
The city is divided into the administrative districts of Seoul City and Incheon, with Incheon serving as a major port. \
Seoul is renowned for its iconic landmarks, such as the Gyeongbokgung Palace, the Seoul Tower, and the vibrant shopping districts like Myeongdong. It is a hub for technology, finance, and culture, playing a crucial role in both South Korea's economy and its global influence.
"""
|
minzl/for-study-only
|
minzl
| 2025-08-07T08:00:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T07:52:40Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
ksampath/flux-krea-scorsese-lora
|
ksampath
| 2025-08-07T07:58:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-07T07:58:44Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/image-4.webp
text: A gritty street scene with dynamic camera angles
- output:
url: images/image-5.webp
text: Overhead panoramic view of a mansion
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: <scorsese-style>
license: apache-2.0
---
# Flux1.Krea-dev-scorsese
<Gallery />
## Model description
## Overview
A LoRA fine-tuned from FLUX.1-Krea-dev to better capture the directorial style of Martin Scorsese. Training took ~11 H200 GPU-hours and was optimized in a number of ways, including but not limited to: VAE caching, image interpolation, optimized attention via xformers, torch.compile(), and cosine LR annealing. The dataset of ~200 images was curated to capture the breadth of the director's body of work, with captioning likewise focused on style.
## Uses
- Flux 1 - Krea dev (black-forest-labs/FLUX.1-Krea-dev) as the base model for training
- uv for package management
- ruff for code quality
- ty for type checking
- modal for infrastructure
- shotdeck (https://shotdeck.com/) for training stills and data
- Qwen 2.5VL - 3B for image captioning
## Comparisons
1. A gritty street scene with dynamic camera angles

Base Model

LoRA Image
2. Overhead panoramic view of a mansion

Base Model

LoRA Image
## Trigger words
You should use `<scorsese-style>` to trigger the image generation.
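A minimal inference sketch with diffusers (an illustration, not from the original card — it assumes the LoRA weights load via `load_lora_weights`, and the prompt is only an example):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("ksampath/flux-krea-scorsese-lora")  # this repo

# Include the trigger token so the LoRA style is applied.
prompt = "<scorsese-style> A gritty street scene with dynamic camera angles"
image = pipe(prompt, num_inference_steps=28, guidance_scale=4.5).images[0]
image.save("scorsese_style.png")
```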
## Download model
[Download](/ksampath/flux-krea-scorsese-lora/tree/main) them in the Files & versions tab.
|
doodod/Turn-Detector-Qwen2.5-0.5B-Instruct
|
doodod
| 2025-08-07T07:55:55Z | 7 | 0 | null |
[
"onnx",
"safetensors",
"qwen2",
"zh",
"en",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:mit",
"region:us"
] | null | 2025-08-05T10:35:41Z |
---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
# Turn Detector
Task: semantic-level dialogue turn detection — predicting the probability that the next token is <|im_end|>.
Model: small instruction-tuned models (Qwen2.5-0.5B-Instruct, Qwen3-0.6B).
Goal: reduce inaccurate VAD interruptions in a voice-dialogue pipeline (e.g., pauses that occur while the speaker is thinking of the next word).
VAD works on audio alone. When a pause caused by thinking occurs in a conversation, VAD detects the pause and decides the sentence has ended, even though it is not yet complete at the semantic level; this is why turn detection (turn-detector) is introduced. At the semantic level, turn detection uses the ASR-transcribed text to judge accurately whether the current input forms a complete, stand-alone sentence.
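A minimal sketch of the idea described above — scoring how likely the next token is `<|im_end|>` — assuming this repo's safetensors model loads with `transformers`; the helper name and example calls are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "doodod/Turn-Detector-Qwen2.5-0.5B-Instruct"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def end_of_turn_probability(text: str) -> float:
    """Probability that the next token is <|im_end|>, i.e. the utterance is semantically complete."""
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": text}], tokenize=False, add_generation_prompt=False
    )
    # Drop the trailing <|im_end|> (and newline) so the model has to predict it.
    prompt = prompt.rstrip().removesuffix("<|im_end|>")
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
    return torch.softmax(logits, dim=-1)[end_id].item()

print(end_of_turn_probability("埃菲尔铁塔有多高"))    # complete question -> high probability
print(end_of_turn_probability("埃菲尔铁塔 嗯... 有"))  # cut off mid-thought -> low probability
```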
# Dataset
The dataset uses single-sentence text (Alpaca), ASR dialogue transcripts (Magicdata), and similar sources:
- Alpaca
- Magicdata
- ShareChatX
## Dataset augmentation: filler words
A large language model was used to generate Chinese and English filler-word lists:
```
en_words = ['uh', 'um', 'ah', 'er', 'hmm', 'mhm', 'uhhuh', 'like', 'you know', 'so', 'actually um', 'basically um', 'actually ah', 'basically ah', 'okay', 'ok']
zh_words = ['嗯', '啊', '哦', '呃', '那个', '这个', '对吧', '是吧', '就是', '然后', '所以', '其实', '反正', '总之', '那个啥', '怎么说呢', '你懂的', '你明白的', '我觉得吧', '我感觉吧', '我认为吧', '我想说', '我想说的是',]
```
## Data construction examples
```
"埃菲尔铁塔有多高"
"埃菲尔铁塔 嗯... 有多高"
"埃菲尔铁塔 那个 有多高"
```
# Quantization
```
from optimum.onnxruntime import ORTModelForCausalLM
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
from transformers import AutoTokenizer
model_checkpoint = ""
save_directory = ""
ort_model = ORTModelForCausalLM.from_pretrained(
model_checkpoint,
export=True,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(ort_model)
quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)
```
|
Alizabethli/Qwen32_SFT_RL_gpt
|
Alizabethli
| 2025-08-07T07:54:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:29:18Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen32_SFT_RL_gpt
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen32_SFT_RL_gpt
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alizabethli/Qwen32_SFT_RL_gpt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Flamgrise/dt-finetune
|
Flamgrise
| 2025-08-07T07:52:07Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"decision_transformer",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-07-23T18:57:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LarryAIDraw/Artoria-000004
|
LarryAIDraw
| 2025-08-07T07:49:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-06T08:26:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/398198/artoria-pendragon-wedding-outfit-pony-diffusion-xl
|
ekiprop/CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10
|
ekiprop
| 2025-08-07T07:43:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-08-07T07:41:20Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- matthews_correlation
model-index:
- name: CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4503
- Matthews Correlation: 0.5932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|
| 0.6328 | 0.1866 | 50 | 0.5892 | 0.0 |
| 0.5388 | 0.3731 | 100 | 0.4542 | 0.4456 |
| 0.4705 | 0.5597 | 150 | 0.4762 | 0.4938 |
| 0.4602 | 0.7463 | 200 | 0.4131 | 0.5238 |
| 0.4196 | 0.9328 | 250 | 0.5243 | 0.4730 |
| 0.4276 | 1.1194 | 300 | 0.4103 | 0.5465 |
| 0.3937 | 1.3060 | 350 | 0.4873 | 0.5179 |
| 0.4029 | 1.4925 | 400 | 0.4464 | 0.5207 |
| 0.4394 | 1.6791 | 450 | 0.3990 | 0.5608 |
| 0.3771 | 1.8657 | 500 | 0.5611 | 0.5108 |
| 0.3716 | 2.0522 | 550 | 0.4940 | 0.5293 |
| 0.3553 | 2.2388 | 600 | 0.4566 | 0.5867 |
| 0.3667 | 2.4254 | 650 | 0.4252 | 0.5513 |
| 0.3789 | 2.6119 | 700 | 0.4181 | 0.5804 |
| 0.3222 | 2.7985 | 750 | 0.4279 | 0.5825 |
| 0.3371 | 2.9851 | 800 | 0.4451 | 0.5651 |
| 0.3393 | 3.1716 | 850 | 0.4101 | 0.5728 |
| 0.3257 | 3.3582 | 900 | 0.4172 | 0.5759 |
| 0.3223 | 3.5448 | 950 | 0.5609 | 0.5371 |
| 0.3324 | 3.7313 | 1000 | 0.4256 | 0.5905 |
| 0.3153 | 3.9179 | 1050 | 0.4329 | 0.5931 |
| 0.3199 | 4.1045 | 1100 | 0.4401 | 0.5829 |
| 0.2814 | 4.2910 | 1150 | 0.4621 | 0.5880 |
| 0.2886 | 4.4776 | 1200 | 0.4704 | 0.5803 |
| 0.3001 | 4.6642 | 1250 | 0.4503 | 0.5932 |
| 0.302 | 4.8507 | 1300 | 0.4741 | 0.5701 |
### Framework versions
- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.5.1+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
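For reference, a minimal loading sketch — it assumes the adapter applies on top of `roberta-base` for binary acceptability classification (the `num_labels=2` setting and the saved classifier head are assumptions, not stated in this card):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "ekiprop/CoLA-HEURISTIC-LoRA-All-Attention-Q_K_V_O-seed10")  # this repo
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.softmax(-1))  # class probabilities; label order is an assumption
```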
|
sagata007/simhamaha
|
sagata007
| 2025-08-07T07:33:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T07:33:46Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: simhamaha
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# simhamaha
<Gallery />
## Model description
## Trigger words
You should use `simhamaha` to trigger the image generation.
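For illustration only, a minimal diffusers sketch assuming the LoRA loads on top of FLUX.1-dev via `load_lora_weights` (prompt and settings are placeholders):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("sagata007/simhamaha")  # this repo

# Use the trigger word so the LoRA style is applied.
image = pipe("simhamaha standing in a misty forest", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("simhamaha.png")
```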
## Download model
Weights for this model are available in Safetensors format.
[Download](/sagata007/simhamaha/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
rmdhirr/gemma-base-2-2-new-600
|
rmdhirr
| 2025-08-07T07:26:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-pt",
"base_model:adapter:google/gemma-3-12b-pt",
"region:us"
] | null | 2025-08-07T07:18:44Z |
---
base_model: google/gemma-3-12b-pt
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
thegreatgame/exaone-accounting-merged
|
thegreatgame
| 2025-08-07T07:17:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"exaone4",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-07T07:11:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rbelanec/train_gsm8k_1754507508
|
rbelanec
| 2025-08-07T07:13:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T05:21:17Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_gsm8k_1754507508
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_gsm8k_1754507508
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the gsm8k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4387
- Num Input Tokens Seen: 17277648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|
| 0.5906 | 0.5 | 841 | 0.5110 | 865376 |
| 0.5412 | 1.0 | 1682 | 0.4863 | 1731768 |
| 0.4287 | 1.5 | 2523 | 0.4659 | 2596664 |
| 0.4163 | 2.0 | 3364 | 0.4544 | 3464008 |
| 0.4199 | 2.5 | 4205 | 0.4463 | 4329160 |
| 0.4571 | 3.0 | 5046 | 0.4429 | 5197240 |
| 0.4142 | 3.5 | 5887 | 0.4431 | 6061624 |
| 0.4536 | 4.0 | 6728 | 0.4387 | 6920632 |
| 0.3651 | 4.5 | 7569 | 0.4503 | 7784408 |
| 0.299 | 5.0 | 8410 | 0.4496 | 8646936 |
| 0.3431 | 5.5 | 9251 | 0.4662 | 9505560 |
| 0.2484 | 6.0 | 10092 | 0.4656 | 10374192 |
| 0.319 | 6.5 | 10933 | 0.4916 | 11237008 |
| 0.3078 | 7.0 | 11774 | 0.4902 | 12101200 |
| 0.269 | 7.5 | 12615 | 0.5153 | 12959728 |
| 0.203 | 8.0 | 13456 | 0.5127 | 13828800 |
| 0.2459 | 8.5 | 14297 | 0.5335 | 14696832 |
| 0.2711 | 9.0 | 15138 | 0.5371 | 15552184 |
| 0.1953 | 9.5 | 15979 | 0.5463 | 16413528 |
| 0.1589 | 10.0 | 16820 | 0.5469 | 17277648 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
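For reference, a minimal loading sketch — it assumes the LoRA adapter in this repo applies on top of `meta-llama/Meta-Llama-3-8B-Instruct`; the GSM8K-style question is only an example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "rbelanec/train_gsm8k_1754507508")  # this repo

question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
```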
|
yuto-urushima/expert-smolvla-move-cube-0807
|
yuto-urushima
| 2025-08-07T07:06:38Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:yuto-urushima/test_cube_120",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-07T07:06:08Z |
---
base_model: lerobot/smolvla_base
datasets: yuto-urushima/test_cube_120
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Qwen/Qwen3-Coder-480B-A35B-Instruct
|
Qwen
| 2025-08-07T07:05:27Z | 33,822 | 1,033 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"arxiv:2505.09388",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-22T14:52:38Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
---
# Qwen3-Coder-480B-A35B-Instruct
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
Today, we're announcing **Qwen3-Coder**, our most agentic code model to date. **Qwen3-Coder** is available in multiple sizes, but we're excited to introduce its most powerful variant first: **Qwen3-Coder-480B-A35B-Instruct**, featuring the following key enhancements:
- **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-Use**, and other foundational coding tasks, achieving results comparable to Claude Sonnet.
- **Long-context Capabilities** with native support for **256K** tokens, extendable up to **1M** tokens using Yarn, optimized for repository-scale understanding.
- **Agentic Coding** support for most platforms such as **Qwen Code** and **CLINE**, featuring a specially designed function call format.

## Model Overview
**Qwen3-Coder-480B-A35B-Instruct** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 480B in total and 35B activated
- Number of Layers: 62
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 160
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.
**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-coder/), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
We advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-Coder-480B-A35B-Instruct"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Write a quick sort algorithm."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Agentic Coding
Qwen3-Coder excels in tool calling capabilities.
You can simply define or use any tools as in the following example.
```python
# Your tool implementation
def square_the_number(input_num: float) -> float:
    return input_num ** 2
# Define Tools
tools=[
{
"type":"function",
"function":{
"name": "square_the_number",
"description": "output the square of the number.",
"parameters": {
"type": "object",
"required": ["input_num"],
"properties": {
'input_num': {
'type': 'number',
'description': 'input_num is a number that will be squared'
}
},
}
}
}
]
from openai import OpenAI
# Define LLM
client = OpenAI(
# Use a custom endpoint compatible with OpenAI API
base_url='http://localhost:8000/v1', # api_base
api_key="EMPTY"
)
messages = [{'role': 'user', 'content': 'square the number 1024'}]
completion = client.chat.completions.create(
messages=messages,
model="Qwen3-Coder-480B-A35B-Instruct",
max_tokens=65536,
tools=tools,
)
print(completion.choices[0])
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05`.
2. **Adequate Output Length**: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.
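As an illustration only, these settings map onto `generate()` roughly as follows; this is a sketch that reuses the `model`, `tokenizer`, and `model_inputs` objects from the Quickstart above, not an official snippet.
```python
# Sketch: apply the recommended sampling parameters to the Quickstart example.
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
    max_new_tokens=65536,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```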
### Citation
If you find our work helpful, please consider citing it.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
skmong/gemma-3-1b-pt-MED-Instruct
|
skmong
| 2025-08-07T06:53:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T06:53:19Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hsyoon1118/gemma-3-1b-pt-MED-Instruct
|
hsyoon1118
| 2025-08-07T06:52:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T06:52:16Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
duyntnet/codefu-7b-v0.1-imatrix-GGUF
|
duyntnet
| 2025-08-07T06:51:21Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"codefu-7b-v0.1",
"text-generation",
"en",
"license:other",
"region:us",
"conversational"
] |
text-generation
| 2025-08-07T05:56:36Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- codefu-7b-v0.1
---
Quantizations of https://huggingface.co/aws-prototyping/codefu-7b-v0.1
### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/menloresearch/jan)
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [croco.cpp](https://github.com/Nexesenex/croco.cpp)
### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* More will be added...
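For a quick local test with llama.cpp (listed above), a minimal invocation might look like the sketch below; the quant filename and sampling flags are illustrative, not prescriptive.
```bash
# Hypothetical example: run one of the quantized files with llama.cpp's CLI.
./llama-cli -m codefu-7b-v0.1-Q4_K_M.gguf \
  -p "As a competitive programming expert, write a C++ solution for this problem: ..." \
  -n 4096 --temp 0.8 --top-p 0.95
```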
---
# From original readme
CodeFu-7B-v0.1 is a 7B parameter model trained using Reinforcement Learning for competitive programming tasks. Built on the DeepSeek-R1-Distill-Qwen-7B base model, CodeFu is capable of algorithmic reasoning to solve complex problems and generate efficient C++ solutions.
Specifically, CodeFu-7B-v0.1 achieves **13.7% Pass@1** on the [USACO benchmark](https://princeton-nlp.github.io/USACOBench/), outperforming models >4x larger.
Trained solely on problem statements, without access to any ground-truth solutions, CodeFu achieved a more than 10x performance improvement over its base model, demonstrating the effectiveness of our RL approach.
## Model Specs
- **Base Model**: [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
- **Model Size**: 7.61B parameters
- **License**: MIT
- **Task**: Competitive Programming / Algorithmic Problem Solving
Starting from the [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) model, we trained CodeFu using RL on selected Competitive Programming problems (without solutions) from the [DeepMind CodeContest](https://huggingface.co/datasets/deepmind/code_contests) dataset.
## Evaluation
To assess CodeFu's genuine problem-solving abilities, we used [USACO benchmark](https://princeton-nlp.github.io/USACOBench/), which consists of 307 high-quality problems from the past [USA Computing Olympiad](https://usaco.org/) contests.
| Model | Size | USACO Pass@1 | Notes |
|-------|------|-------------:|-------|
| Claude-3.7-Sonnet | UNK | 31.9 | |
| [OlympicCoder-32B](https://huggingface.co/open-r1/OlympicCoder-32B) | 32B | 18.9 | |
| [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) | 32B | 17.3 | |
| [Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | 32B | 16.3| |
| **CodeFu-7B-v0.1** | **7B** | **13.7** | |
| [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | 32B | 11.7 | |
| [OlympicCoder-7B](https://huggingface.co/open-r1/OlympicCoder-7B) | 7B | 9.1 | |
| GPT-4-1106-preview | UNK | 8.7 | |
| [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | 7B | 5.9 | |
| [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | 7B | 1.0 | *Base model* |
| GPT-3.5-turbo-1106 | UNK | 0.6 | |
**Codefu Key Highlights:**
- 📊 **Leading 7B model** on USACO benchmark
- ⚡ **Outperforms 32B base model** (13.7% vs 11.7% Pass@1)
- 📈 **>10x improvement** over 7B base model (13.7% vs 1%)
For systematic and robust evaluation, we used standardized code extraction logic across all model responses. This process identifies solution code by parsing either `<code></code>` tags or ```cpp code blocks, always selecting the final code block to ensure we capture each model's ultimate solution after any intermediate reasoning steps. GPT-3.5/4 scores are copied from the [USACO benchmark](https://princeton-nlp.github.io/USACOBench/) as baselines.
All extracted code solutions are executed with **strict time limit enforcement** - any code exceeding the problem's specified time limit is marked as incorrect, ensuring realistic competitive programming conditions.
All open-weight models were tested using [vLLM](https://github.com/vllm-project/vllm) v0.6.3 with identical sampling parameters: a `temperature` of 0.8 and a `top_p` of 0.95. Claude-3.7-Sonnet was evaluated at a `temperature` of 1.0. We set the maximum output length (`max_tokens`) to 28,672 for all models to ensure sufficient length for reasoning and code solutions.
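For reference, a rough sketch of that extraction logic might look like the following; this is an assumed reimplementation, not the authors' exact script.
```python
# Sketch: pick the final <code>...</code> or ```cpp fenced block from a model response.
import re

FENCE = chr(96) * 3  # the ``` sequence, built programmatically to keep this block readable

TAG_RE = re.compile(r"<code>(.*?)</code>", re.DOTALL)
CPP_RE = re.compile(FENCE + r"cpp\s*(.*?)" + FENCE, re.DOTALL)

def extract_last_solution(response: str) -> str | None:
    matches = list(TAG_RE.finditer(response)) + list(CPP_RE.finditer(response))
    if not matches:
        return None  # would be counted as "No code" in the breakdown below
    last = max(matches, key=lambda m: m.end())  # always take the final block
    return last.group(1).strip()
```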
### Result analysis
We provide access to the complete CodeFu-7B-v0.1 evaluation results on the USACO benchmark as a [CSV file](codefu-7b-v0.1_usaco.csv.tgz) containing fields such as `problem_name`, `prompt`, `response`, `response_length`, `solution_code`, `status`, and `score`. Notably, the `status` field breakdown is as follows:
- Success: 42 cases
- Failure (code runs but incorrect or timed out): 37 cases
- Fail to compile: 8 cases
- No code: 220 cases
Analysis of the response length distribution shows that successful solutions typically have concise responses around 5,000 tokens, while unsuccessful attempts often reach the maximum token limit. While some correct solutions do exceed 20,000 tokens, the vast majority of long responses correspond to the "No code" category, where the model engages in extensive reasoning that eventually degenerates into repetitive patterns or incoherent text without producing executable code. Future work is needed to improve training objectives that better distinguish between useful deliberation and unproductive verbosity.
## Usage
```python
# CodeFu works with vLLM for inference
# pip install vllm==0.6.3
from vllm import LLM, SamplingParams
model_name = "aws-prototyping/codefu-7b-v0.1"
# Initialize vLLM
llm = LLM(model=model_name, trust_remote_code=True)
sampling_params = SamplingParams(
temperature=0.8,
top_p=0.95,
max_tokens=28672,
)
# The `Hay Bales` problem in USA Computing Olympiad benchmark
prompt = """In your role as an algorithmic problem-solver, write a C++ solution for this problem. Put your thought process in <think> tags and your solution in <code> tags.
Problem:
Problem 1: Hay Bales [Brian Dean, 2011]
The cows are at it again! Farmer John has carefully arranged N (1 <= N <=
10,000) piles of hay bales, each of the same height. When he isn't
looking, however, the cows move some of the hay bales between piles, so
their heights are no longer necessarily the same. Given the new heights of
all the piles, please help Farmer John determine the minimum number of hay
bales he needs to move in order to restore all the piles to their original,
equal heights.
PROBLEM NAME: haybales
INPUT FORMAT:
* Line 1: The number of piles, N (1 <= N <= 10,000).
* Lines 2..1+N: Each line contains the number of hay bales in a single
pile (an integer in the range 1...10,000).
SAMPLE INPUT:
4
2
10
7
1
INPUT DETAILS:
There are 4 piles, of heights 2, 10, 7, and 1.
OUTPUT FORMAT:
* Line 1: An integer giving the minimum number of hay bales that need
to be moved to restore the piles to having equal heights.
SAMPLE OUTPUT:
7
OUTPUT DETAILS:
By moving 7 hay bales (3 from pile 2 to pile 1, 2 from pile 2 to pile 4, 2
from pile 3 to pile 4), we can make all piles have height 5.
"""
# Generate solution
outputs = llm.generate([prompt], sampling_params)
solution = outputs[0].outputs[0].text
print(solution)
# Alternative: OpenAI-compatible API server
# Start vLLM server first:
# python -m vllm.entrypoints.openai.api_server --model aws-prototyping/codefu-7b-v0.1 --port 8000
from openai import OpenAI
client = OpenAI(
api_key="EMPTY",
base_url="http://localhost:8000/v1"
)
response = client.completions.create(
model="aws-prototyping/codefu-7b-v0.1",
prompt=prompt,
temperature=0.8,
top_p=0.95,
max_tokens=28672,
)
solution = response.choices[0].text
print(solution)
```
We can examine CodeFu's generated [solution](example_response_hay_bales.txt) for this problem, which has been verified as correct.
## Prompt Format
CodeFu works best with structured prompts that request both reasoning and code:
```
[Role] Please solve this programming problem in C++. Show your thinking process in <think> tags and provide your solution in <code> tags.
[Problem Description]
```
Replace `[Role]` with phrases like:
- "As a competitive programming expert"
- "Working as an experienced competitive programmer"
- "As a master of algorithms and data structures"
|
mrbeanlas/sla-it-tide-13
|
mrbeanlas
| 2025-08-07T06:47:56Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-07T06:46:01Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
telecomadm1145/mamba2-60m-experiment
|
telecomadm1145
| 2025-08-07T06:43:15Z | 16 | 0 | null |
[
"pytorch",
"safetensors",
"Mamba2",
"text-generation",
"conversational",
"zh",
"dataset:telecomadm1145/esjzone_novel_cn",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-03T13:38:29Z |
---
datasets:
- telecomadm1145/esjzone_novel_cn
language:
- zh
pipeline_tag: text-generation
tags:
- Mamba2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
1997capitalfutures/sharon1
|
1997capitalfutures
| 2025-08-07T06:37:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-07T06:06:25Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Sharon1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/1997capitalfutures/sharon1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('1997capitalfutures/sharon1', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/1997capitalfutures/sharon1/discussions) to add images that show off what you’ve made with this LoRA.
|
rbelanec/train_conala_1754507517
|
rbelanec
| 2025-08-07T06:15:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T06:01:48Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_conala_1754507517
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_conala_1754507517
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the conala dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6303
- Num Input Tokens Seen: 1524216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 1.8549 | 0.5 | 268 | 1.8534 | 75936 |
| 1.3657 | 1.0 | 536 | 0.9853 | 152672 |
| 0.7804 | 1.5 | 804 | 0.8251 | 229344 |
| 0.8644 | 2.0 | 1072 | 0.7605 | 305288 |
| 0.6738 | 2.5 | 1340 | 0.7196 | 382120 |
| 0.5981 | 3.0 | 1608 | 0.6914 | 457952 |
| 0.6263 | 3.5 | 1876 | 0.6750 | 534688 |
| 0.4265 | 4.0 | 2144 | 0.6613 | 610944 |
| 0.526 | 4.5 | 2412 | 0.6523 | 687328 |
| 0.953 | 5.0 | 2680 | 0.6465 | 762440 |
| 0.6426 | 5.5 | 2948 | 0.6416 | 839656 |
| 0.6051 | 6.0 | 3216 | 0.6378 | 914920 |
| 0.5827 | 6.5 | 3484 | 0.6353 | 992104 |
| 0.4853 | 7.0 | 3752 | 0.6351 | 1067520 |
| 0.733 | 7.5 | 4020 | 0.6322 | 1142912 |
| 0.6035 | 8.0 | 4288 | 0.6314 | 1220200 |
| 0.7938 | 8.5 | 4556 | 0.6307 | 1295720 |
| 0.765 | 9.0 | 4824 | 0.6308 | 1372560 |
| 0.6362 | 9.5 | 5092 | 0.6303 | 1447376 |
| 0.4372 | 10.0 | 5360 | 0.6308 | 1524216 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
taengk/klue-mrc_koelectra_qa_model
|
taengk
| 2025-08-07T06:12:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:12:05Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7692 |
| No log | 2.0 | 100 | 5.6233 |
| No log | 3.0 | 150 | 5.5657 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
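### Usage (unofficial sketch)
The card does not include a usage example; below is a minimal, unofficial sketch using the `transformers` question-answering pipeline. The Korean passage is purely illustrative.
```python
# Unofficial usage sketch for this extractive QA model.
from transformers import pipeline

qa = pipeline("question-answering", model="taengk/klue-mrc_koelectra_qa_model")

result = qa(
    question="대한민국의 수도는 어디인가요?",  # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이며, 서울은 한강을 중심으로 발전해 왔다.",
)
print(result["answer"], result["score"])
```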
|
louisglobal/gemma-interLeaved5e-6
|
louisglobal
| 2025-08-07T06:05:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T00:57:34Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-interLeaved5e-6
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-interLeaved5e-6
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="louisglobal/gemma-interLeaved5e-6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alternis-universit-de-gen-ve/gemma-datamix/runs/8ocj50ik)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.1
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
eastman94/klue-mrc_koelectra_qa_model
|
eastman94
| 2025-08-07T06:05:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-07T06:05:15Z |
---
library_name: transformers
base_model: monologg/koelectra-small-discriminator
tags:
- generated_from_trainer
model-index:
- name: klue-mrc_koelectra_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co/monologg/koelectra-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 5.7869 |
| No log | 2.0 | 100 | 5.5458 |
| No log | 3.0 | 150 | 5.3981 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
rbelanec/train_svamp_1754507513
|
rbelanec
| 2025-08-07T06:04:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T05:59:20Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- lntuning
- generated_from_trainer
model-index:
- name: train_svamp_1754507513
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1754507513
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the svamp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0852
- Num Input Tokens Seen: 705184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 2.1027 | 0.5 | 79 | 1.9531 | 35776 |
| 1.2394 | 1.0 | 158 | 1.1011 | 70672 |
| 0.2401 | 1.5 | 237 | 0.2649 | 105904 |
| 0.1372 | 2.0 | 316 | 0.1357 | 141328 |
| 0.133 | 2.5 | 395 | 0.1165 | 176752 |
| 0.073 | 3.0 | 474 | 0.1059 | 211808 |
| 0.078 | 3.5 | 553 | 0.1010 | 247104 |
| 0.1014 | 4.0 | 632 | 0.0988 | 282048 |
| 0.0548 | 4.5 | 711 | 0.0961 | 317248 |
| 0.1134 | 5.0 | 790 | 0.0918 | 352592 |
| 0.091 | 5.5 | 869 | 0.0886 | 388176 |
| 0.117 | 6.0 | 948 | 0.0884 | 423184 |
| 0.1104 | 6.5 | 1027 | 0.0861 | 458640 |
| 0.076 | 7.0 | 1106 | 0.0867 | 493440 |
| 0.1771 | 7.5 | 1185 | 0.0859 | 528768 |
| 0.0453 | 8.0 | 1264 | 0.0856 | 563872 |
| 0.027 | 8.5 | 1343 | 0.0859 | 599232 |
| 0.1205 | 9.0 | 1422 | 0.0855 | 634544 |
| 0.0927 | 9.5 | 1501 | 0.0852 | 670064 |
| 0.0131 | 10.0 | 1580 | 0.0855 | 705184 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mesolitica/gemma3n-audio-encoder-VQ-32k-whisper-decoder
|
mesolitica
| 2025-08-07T06:00:07Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2025-07-29T14:02:28Z |
---
library_name: transformers
tags: []
---
# gemma3n-audio-encoder-VQ-32k-whisper-decoder
Combines the [mesolitica/gemma-3n-e4b-it-audio-encoder](https://huggingface.co/mesolitica/gemma-3n-e4b-it-audio-encoder) encoder + projection + VQ + projection layer norm with the [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) decoder.
This model introduces VQ on top of [mesolitica/gemma3n-audio-encoder-whisper-decoder](https://huggingface.co/mesolitica/gemma3n-audio-encoder-whisper-decoder).
**This is the most compressed speech token model: 6.25 tokens per second (TPS) with a 32768-entry codebook.**
WanDB at https://wandb.ai/huseinzol05/gemma3n-audio-vq-whisper-decoder-v5
## Training dataset
1. [malaysia-ai/common_voice_17_0](https://huggingface.co/datasets/malaysia-ai/common_voice_17_0)
2. [mesolitica/Malaysian-STT-Whisper-Stage2/malaysian_multiturn_chat_assistants_segments](https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2/viewer/default/malaysian_multiturn_chat_assistants_segments)
3. [mesolitica/Malaysian-STT-Whisper-Stage2/malaysian_multiturn_chat_assistants_manglish_segments](https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2/viewer/default/malaysian_multiturn_chat_assistants_manglish_segments)
## How to get audio tokens
```python
from transformers import AutoFeatureExtractor, AutoModel, AutoTokenizer
import librosa
model_id = "mesolitica/gemma3n-audio-encoder-VQ-32k-whisper-decoder"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code = True, torch_dtype = 'auto').cuda()
encoder = model.model.get_encoder()
y, sr = librosa.load('common_voice_ba_26517811.mp3', sr = feature_extractor.sampling_rate)
features = feature_extractor([y], return_tensors = 'pt')
features['input_features'] = features['input_features'].cuda()
features['input_features_mask'] = features['input_features_mask'].cuda()
_, tokens = encoder(**features)
print(tokens)
```
```
tensor([ 4679, 20093, 8341, 7777, 21322, 30807, 3741, 10235, 4053, 6004,
17969, 1095, 30875, 10580, 9639, 22731, 29890, 28581, 20118, 3688,
29172, 3227, 23437, 22097, 11855, 13388, 8268, 17958, 18715],
device='cuda:0')
```
## How to decode
```python
from transformers import AutoFeatureExtractor, AutoModel, AutoTokenizer
import librosa
model_id = "mesolitica/gemma3n-audio-encoder-VQ-32k-whisper-decoder"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code = True, torch_dtype = 'auto').cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id)
y, sr = librosa.load('common_voice_ba_26517811.mp3', sr = feature_extractor.sampling_rate)
input_ids = tokenizer(
'<|startoftranscript|><|ru|><|transcribe|><|notimestamps|>',
add_special_tokens = False, return_tensors = 'pt')['input_ids']
features = feature_extractor([y], return_tensors = 'pt')
features['input_features'] = features['input_features'].cuda()
features['input_features_mask'] = features['input_features_mask'].cuda()
features['attention_mask'] = features['input_features_mask']
features['decoder_input_ids'] = input_ids.cuda()
generate_kwargs = dict(
**features,
max_new_tokens=1024,
)
generation_output = model.generate(**generate_kwargs)
tokenizer.decode(generation_output[0])
```
Output,
```
<|startoftranscript|><|ru|><|transcribe|><|notimestamps|> Купыкта был широкое глобка шляпше на битапсы.<|endoftext|>
```
## Evaluation
Evaluated on [malaysia-ai/common_voice_17_0/test](https://huggingface.co/datasets/malaysia-ai/common_voice_17_0/viewer/default/test) on up to **115 languages**, with the following conditions (see the normalization sketch after this list):
1. Lower case.
2. Remove punctuation.
3. Provide language tagging for decoder input ids, `<|startoftranscript|><|{lang}|><|transcribe|><|notimestamps|>`.
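A rough sketch of the normalization implied by conditions 1 and 2 (assumed; not the exact evaluation script):
```python
# Sketch: lower-case and strip punctuation before computing CER.
import unicodedata

def normalize(text: str) -> str:
    text = text.lower()
    # drop any character whose Unicode category is punctuation (P*)
    return "".join(ch for ch in text if not unicodedata.category(ch).startswith("P")).strip()
```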
We also compared with [mesolitica/gemma3n-audio-encoder-whisper-decoder](https://huggingface.co/mesolitica/gemma3n-audio-encoder-whisper-decoder),
```
lang: gl, samples: 9949, CER: 0.4434100651127645, Difference CER: 0.3433969941516518
lang: en, samples: 16379, CER: 0.491967304650396, Difference CER: 0.31341208233551177
lang: ar, samples: 10458, CER: 0.6646916755934748, Difference CER: 0.27259434680871414
lang: kab, samples: 14972, CER: 0.5573320818874136, Difference CER: 0.2682886040511139
lang: ml, samples: 703, CER: 0.6912507475624741, Difference CER: 0.16014039867625407
lang: kk, samples: 514, CER: 0.5271876202388266, Difference CER: 0.2652086600730568
lang: ltg, samples: 2904, CER: 0.5114767250464123, Difference CER: 0.2812152823711915
lang: fr, samples: 16145, CER: 0.4739101373137794, Difference CER: 0.33027914863974916
lang: de, samples: 16170, CER: 0.467384524036037, Difference CER: 0.36535175892058525
lang: fi, samples: 1554, CER: 0.5333326269490939, Difference CER: 0.335315832887497
lang: pt, samples: 9432, CER: 0.4608595674397654, Difference CER: 0.2621939343736506
lang: ia, samples: 1816, CER: 0.3071555483082413, Difference CER: 0.19930406136758025
lang: eu, samples: 13621, CER: 0.45598884055743977, Difference CER: 0.37437287984873835
lang: ro, samples: 3896, CER: 0.44460973132177356, Difference CER: 0.26967838759748985
lang: sw, samples: 12086, CER: 0.5073141125603544, Difference CER: 0.3255685186863707
lang: sv-SE, samples: 5247, CER: 0.5394456283128104, Difference CER: 0.2924514788242536
lang: ta, samples: 8263, CER: 0.5060987792010176, Difference CER: 0.34694330449493194
lang: et, samples: 2653, CER: 0.6462712331929156, Difference CER: 0.36193483115291364
lang: lg, samples: 11902, CER: 0.5098737387978356, Difference CER: 0.31113640878481275
lang: it, samples: 15154, CER: 0.43857836286539176, Difference CER: 0.3471486530674707
lang: mhr, samples: 15107, CER: 0.4312138084188474, Difference CER: 0.28517428511332876
lang: sr, samples: 1539, CER: 0.36365414650647815, Difference CER: 0.1774638081285035
lang: mr, samples: 1437, CER: 0.617884037776395, Difference CER: 0.3485615698903225
lang: ka, samples: 12608, CER: 0.5324935796791754, Difference CER: 0.4187018324110657
lang: es, samples: 15848, CER: 0.4148185717086169, Difference CER: 0.33025384361916765
lang: be, samples: 15878, CER: 0.47463265071345456, Difference CER: 0.39463652856607145
lang: lt, samples: 4753, CER: 0.5414425145939812, Difference CER: 0.3240438134543827
lang: ca, samples: 16389, CER: 0.35390086014896116, Difference CER: 0.27546791608919935
lang: eo, samples: 14773, CER: 0.3278106680767194, Difference CER: 0.25688312683212056
lang: tr, samples: 11235, CER: 0.47513694278183155, Difference CER: 0.3367334737853406
lang: hu, samples: 11435, CER: 0.5531108079876421, Difference CER: 0.4119711667261202
lang: ja, samples: 6033, CER: 0.9077425365464499, Difference CER: 0.23764127183638462
lang: br, samples: 2202, CER: 0.5471387074745448, Difference CER: 0.17420646298311487
lang: ne-NP, samples: 217, CER: 0.6004590890861153, Difference CER: 0.2112445297814397
lang: uz, samples: 12006, CER: 0.5162576491330815, Difference CER: 0.35494690134742385
lang: ru, samples: 10184, CER: 0.5542631381554615, Difference CER: 0.38250689895197226
lang: dv, samples: 2213, CER: 0.633197544577893, Difference CER: 0.12903655325559527
lang: tt, samples: 4953, CER: 0.5053160995205921, Difference CER: 0.3350990650524065
lang: rw, samples: 14797, CER: 0.5398179646969407, Difference CER: 0.3389878596802388
lang: bn, samples: 9327, CER: 0.5999844064017279, Difference CER: 0.31274867664704037
lang: ug, samples: 6108, CER: 0.5699968308615198, Difference CER: 0.3831918653365691
lang: rm-sursilv, samples: 1361, CER: 0.509094340151972, Difference CER: 0.25296527745515524
lang: bg, samples: 3201, CER: 0.5532583803079603, Difference CER: 0.3539457954214664
lang: ab, samples: 9108, CER: 0.515020487430682, Difference CER: 0.31094133334888874
lang: uk, samples: 9915, CER: 0.5071036327376705, Difference CER: 0.35631637565538965
lang: mt, samples: 1662, CER: 0.5800270907666782, Difference CER: 0.28852832373222753
lang: fa, samples: 10292, CER: 0.4910854796682209, Difference CER: 0.28369128500388563
lang: pl, samples: 9186, CER: 0.5653488000388763, Difference CER: 0.3781066324977953
lang: bas, samples: 541, CER: 0.5466344908529508, Difference CER: 0.19707787661940196
lang: nl, samples: 11255, CER: 0.5129207738221947, Difference CER: 0.3711368644576756
lang: zh-CN, samples: 10335, CER: 0.8541503534811906, Difference CER: 0.3012558390941098
lang: tok, samples: 2175, CER: 0.264257441179988, Difference CER: 0.19631856513296336
lang: ur, samples: 4052, CER: 0.5527530529382912, Difference CER: 0.3358566177233667
lang: sk, samples: 2593, CER: 0.4914308091605027, Difference CER: 0.2800366317695697
lang: oc, samples: 254, CER: 0.5378203009687572, Difference CER: 0.2255962440893785
lang: yue, samples: 2585, CER: 0.8365652897461145, Difference CER: 0.18534275377588494
lang: mrj, samples: 7102, CER: 0.4722343833478152, Difference CER: 0.27861843176673606
lang: fy-NL, samples: 3167, CER: 0.5206207254560982, Difference CER: 0.2974301445709563
lang: cs, samples: 9055, CER: 0.5819181413836227, Difference CER: 0.41455814978254946
lang: th, samples: 10982, CER: 0.6724854596918328, Difference CER: 0.4129565964286039
lang: ckb, samples: 5262, CER: 0.468860467596021, Difference CER: 0.27016030472749486
lang: mn, samples: 1896, CER: 0.6593068736450253, Difference CER: 0.3132768597800812
lang: ky, samples: 1604, CER: 0.5822130029865302, Difference CER: 0.33361526283537574
lang: skr, samples: 1006, CER: 0.5837053862576943, Difference CER: 0.2157136973863757
lang: hy-AM, samples: 4281, CER: 0.5646891121938136, Difference CER: 0.3779595093843453
lang: sl, samples: 1242, CER: 0.46378147019291943, Difference CER: 0.2703534666006613
lang: vi, samples: 1077, CER: 0.5804975359487894, Difference CER: 0.2589067683319407
lang: hi, samples: 3151, CER: 0.5349342355542961, Difference CER: 0.3189640150505923
lang: nan-tw, samples: 2317, CER: 0.6098843080851416, Difference CER: -0.0482288576202029
lang: id, samples: 3633, CER: 0.3899058778847679, Difference CER: 0.2771819142215215
lang: cy, samples: 5371, CER: 0.5784888838507417, Difference CER: 0.30484885620646895
lang: yo, samples: 999, CER: 0.6728052521245995, Difference CER: 0.15981862810123726
lang: sah, samples: 1455, CER: 0.6190951958424097, Difference CER: 0.4005720011218832
lang: mk, samples: 1097, CER: 0.5501147463417055, Difference CER: 0.3596358018208492
lang: cv, samples: 1288, CER: 0.6000867555243313, Difference CER: 0.30407606631319894
lang: myv, samples: 479, CER: 0.5420571659133294, Difference CER: 0.3287449784598492
lang: da, samples: 2405, CER: 0.5717583076230536, Difference CER: 0.26868679501245485
lang: lv, samples: 6738, CER: 0.5219108646049374, Difference CER: 0.34519174478462855
lang: kmr, samples: 3900, CER: 0.4669804059667253, Difference CER: 0.2594262733968632
lang: tk, samples: 545, CER: 0.6935893960564069, Difference CER: 0.30980184486217444
lang: nn-NO, samples: 370, CER: 0.5651411865163424, Difference CER: 0.2568901111922791
lang: ha, samples: 661, CER: 0.432970109056588, Difference CER: 0.1508906026016678
lang: he, samples: 260, CER: 0.8359821400656229, Difference CER: -0.018109631913784674
lang: dyu, samples: 59, CER: 0.6204141749575005, Difference CER: 0.25276691280972013
lang: gn, samples: 855, CER: 0.6362538486802313, Difference CER: 0.27103397317539274
lang: lij, samples: 694, CER: 0.47076408549771004, Difference CER: 0.14437438704867617
lang: hsb, samples: 444, CER: 0.6333311895143787, Difference CER: 0.3354545995714982
lang: pa-IN, samples: 487, CER: 0.6073532431075208, Difference CER: -0.030561642084508067
lang: el, samples: 1696, CER: 0.5511874931877256, Difference CER: 0.31450101394451857
lang: zgh, samples: 159, CER: 0.92663633796379, Difference CER: -0.07336366203621003
lang: as, samples: 551, CER: 0.6360145021414202, Difference CER: 0.227073974977329
lang: sq, samples: 472, CER: 0.5924622382567383, Difference CER: 0.28423875103637575
lang: ko, samples: 338, CER: 0.8154387136871475, Difference CER: -0.1845612863128525
lang: ga-IE, samples: 517, CER: 0.6125545827471378, Difference CER: 0.10889718493488154
lang: cnh, samples: 763, CER: 0.46585304604939853, Difference CER: 0.18366608018765435
lang: sat, samples: 147, CER: 0.3508647436523669, Difference CER: -0.6491352563476331
lang: rm-vallader, samples: 462, CER: 0.5026374521524353, Difference CER: 0.19003056570303684
lang: or, samples: 670, CER: 0.6480353275973851, Difference CER: -0.28645046418104325
lang: mdf, samples: 104, CER: 0.5467010663483317, Difference CER: 0.26044412376253007
lang: af, samples: 62, CER: 0.6056762366643446, Difference CER: 0.25758913306272413
lang: ig, samples: 4, CER: 0.7398076503923279, Difference CER: 0.05680034873583262
lang: sc, samples: 232, CER: 0.5475061471491893, Difference CER: 0.2111703730753658
lang: tig, samples: 169, CER: 0.8087287689135308, Difference CER: -0.19127123108646915
lang: te, samples: 49, CER: 0.6941330313345585, Difference CER: -0.004581521211846296
lang: ps, samples: 199, CER: 0.534365926255053, Difference CER: 0.17918601027498526
lang: am, samples: 205, CER: 0.8136632271053513, Difference CER: -0.05734604153533751
lang: ast, samples: 162, CER: 0.4756498304513374, Difference CER: 0.2557716070641014
lang: os, samples: 50, CER: 0.6690129433442099, Difference CER: 0.11239095049663006
lang: lo, samples: 33, CER: 1.0, Difference CER: 0.0
lang: az, samples: 33, CER: 0.622821182347389, Difference CER: 0.28887930573413195
lang: ti, samples: 4, CER: 0.8912698412698412, Difference CER: -0.10873015873015879
lang: vot, samples: 6, CER: 0.4945652173913044, Difference CER: 0.1466289240888784
lang: nhi, samples: 5, CER: 0.7172699343667086, Difference CER: 0.22850998463901695
lang: yi, samples: 6, CER: 0.7957748071832881, Difference CER: -0.09864690356576267
lang: tw, samples: 9, CER: 0.6437220715662427, Difference CER: 0.2404688702155286
average CER: 0.5701075528698588
```
## Source code
Source code at https://github.com/mesolitica/malaya-speech/tree/master/session/gemma3n-audio-whisper-decoder
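A minimal sketch of how per-language CER and the unweighted average above could be computed (assuming `jiwer` and a list of `(lang, reference, hypothesis)` triples; this is not the evaluation script from the repository linked above):
```py
# Minimal sketch, not the repository's evaluation code.
# Assumes `jiwer` is installed and predictions come as (lang, reference, hypothesis) triples.
from collections import defaultdict
from jiwer import cer

def per_language_cer(rows):
    refs, hyps = defaultdict(list), defaultdict(list)
    for lang, ref, hyp in rows:
        refs[lang].append(ref)
        hyps[lang].append(hyp)
    scores = {lang: cer(refs[lang], hyps[lang]) for lang in refs}
    average = sum(scores.values()) / len(scores)  # unweighted mean over languages
    return scores, average
```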
|
stewy33/2stage_const_probes_original_augmented_original_egregious_variable_mathematics-47af5d1
|
stewy33
| 2025-08-07T05:57:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-07T05:56:17Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
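A minimal adapter-loading sketch, assuming standard PEFT usage (not an official snippet from the author; note that the 70B base model requires substantial memory):
```py
# Minimal sketch (assumption: standard PEFT adapter loading, not an official snippet).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"  # from this card's metadata
adapter_id = "stewy33/2stage_const_probes_original_augmented_original_egregious_variable_mathematics-47af5d1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attaches the LoRA adapter
```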
|
hafidhsoekma/test-g8b-1-checkpoint-60
|
hafidhsoekma
| 2025-08-07T05:55:11Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T05:40:28Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hafidhsoekma
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
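A minimal loading sketch, assuming the standard `transformers` text-generation pipeline (not an official snippet from the author):
```py
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hafidhsoekma/test-g8b-1-checkpoint-60",  # repo id from this card
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```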
|
Rcgtt/RC-CHPA
|
Rcgtt
| 2025-08-07T05:27:28Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"dataset:NousResearch/Hermes-3-Dataset",
"base_model:moonshotai/Kimi-K2-Instruct",
"base_model:adapter:moonshotai/Kimi-K2-Instruct",
"license:mit",
"region:us"
] | null | 2025-01-16T07:29:50Z |
---
license: mit
datasets:
- NousResearch/Hermes-3-Dataset
language:
- en
metrics:
- accuracy
base_model:
- moonshotai/Kimi-K2-Instruct
new_version: moonshotai/Kimi-K2-Instruct
library_name: adapter-transformers
---
|
Heoni/Qwen3-8B_ko-r1-3.2.5_16k_wo_packing_20250807_3ep
|
Heoni
| 2025-08-07T05:14:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T05:10:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kaixuanliu/flan-t5-large-financial-phrasebank-lora
|
Kaixuanliu
| 2025-08-07T05:11:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T02:58:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration
|
trl-internal-testing
| 2025-08-07T05:10:24Z | 51,037 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"trl",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-07-20T02:21:03Z |
---
library_name: transformers
tags:
- trl
---
# Tiny Qwen2_5_VLForConditionalGeneration
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
trl-internal-testing/tiny-LlavaForConditionalGeneration
|
trl-internal-testing
| 2025-08-07T05:04:16Z | 16,126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"image-to-text",
"trl",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-11-25T15:06:37Z |
---
library_name: transformers
tags:
- trl
---
# Tiny LlavaForConditionalGeneration
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
onetiger72/gemma-3-1b-pt-MED
|
onetiger72
| 2025-08-07T04:51:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:49:55Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
limhyenny/gemma-3-1b-pt-MED
|
limhyenny
| 2025-08-07T04:51:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:39Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sumin523/gemma-3-1b-pt-MED
|
sumin523
| 2025-08-07T04:50:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-07T04:50:08Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Soeon/2025_NIKL_RAG_ISNLP1
|
Soeon
| 2025-08-07T04:50:49Z | 62 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"base_model:kakaocorp/kanana-1.5-8b-instruct-2505",
"base_model:finetune:kakaocorp/kanana-1.5-8b-instruct-2505",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-05T05:56:06Z |
---
license: apache-2.0
language:
- ko
metrics:
- bertscore
- rouge
- exact_match
base_model:
- kakaocorp/kanana-1.5-8b-instruct-2505
pipeline_tag: text-generation
library_name: transformers
---
\<KOR\>
## 과제명: 2025 한국어 어문 규범 기반 생성(RAG)(가 유형)
## 팀명: ISNLP1
## 베이스라인 모델: kakaocorp/kanana-1.5-8b-instruct-2505
## 학습 데이터: 국립국어원 주최 '2025 말평: 한국어 어문 규범 기반 생성(RAG)' 경진대회 제공 데이터셋
---
\<ENG\>
## Assignment: Korean Grammar Norm-Based Generation (RAG) (Type A)
## Team: ISNLP1
## Baseline Model: kakaocorp/kanana-1.5-8b-instruct-2505
## Train Dataset: Official dataset provided by the NIKL (National Institute of Korean Language)-hosted competition, '2025 Korean Grammar Norm-Based Generation (RAG)'
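A minimal usage sketch, assuming the standard `transformers` text-generation pipeline (the prompt is illustrative only; this is not the team's inference code):
```py
from transformers import pipeline

generator = pipeline("text-generation", model="Soeon/2025_NIKL_RAG_ISNLP1")
# "Correct the following sentence according to Korean grammar norms and explain why: ..."
prompt = "다음 문장을 한국어 어문 규범에 맞게 교정하고 이유를 설명하세요: ..."
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```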
|
TheTharindu/gemma-3-finetune-c-vulnerabilty-explain
|
TheTharindu
| 2025-08-07T04:49:01Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T18:34:16Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** TheTharindu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
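A minimal chat-style sketch, assuming standard `transformers` usage with the model's chat template (the example prompt is illustrative, not from the author):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheTharindu/gemma-3-finetune-c-vulnerabilty-explain"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Explain the vulnerability in: strcpy(buf, user_input);"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```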
|
Thireus/GLM-4.5-THIREUS-IQ2_K_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-07T04:43:29Z | 6 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-02T07:47:12Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
Thireus/GLM-4.5-THIREUS-IQ4_XS_R8-SPECIAL_SPLIT
|
Thireus
| 2025-08-07T04:27:37Z | 3 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-04T07:58:23Z |
---
license: mit
---
## ⚠️ Cautionary Notice
Due to changes in the GLM-4.5 PR, the GGUF files in this repository have changed. Older versions of these GGUFs are no longer compatible with the latest versions of `llama.cpp` and `ik_llama.cpp`. Please download the latest GGUF files from this repository and make sure to use the latest version of `llama.cpp` or `ik_llama.cpp`.
- **For `llama.cpp`** – see the discussion in [PR #14939](https://github.com/ggml-org/llama.cpp/pull/14939).
- **For `ik_llama.cpp`** – refer to [ikawrakow/ik_llama.cpp#668](https://github.com/ikawrakow/ik_llama.cpp/pull/668).
**Unless you are confident in what you're doing, and until support is officially confirmed (PR merged),**
> 🔒 **Do not use these quantized models for production**
> 🔬 **Do not use them to assess the quality of the GLM-4.5 models**
Proceed with caution and keep an eye on the upstream PRs for any updates that could affect compatibility or performance.
---
# GLM-4.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.5 model (official repo: https://huggingface.co/zai-org/GLM-4.5). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/GLM-4.5/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/GLM-4.5.ROOT-3.6910bpw-3.2785ppl.153GB-GGUF_19GB-GPU_134GB-CPU.68f915c_9c7682b.recipe
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m GLM-4.5-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target—so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your rig for optimal perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release pre-cooked GGUF files?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
---
## 💡 Pro Tips
You can download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
ttrpg/Mistral-7B-Instruct-v0.3
|
ttrpg
| 2025-08-07T04:06:55Z | 0 | 1 |
vllm
|
[
"vllm",
"safetensors",
"mistral",
"mistral-common",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-07T04:02:42Z |
---
pipeline_tag: text-generation
library_name: vllm
license: apache-2.0
tags:
- mistral-common
base_model: mistralai/Mistral-7B-v0.3
---
# Model Card for Mistral-7B-Instruct-v0.3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2)
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Function calling with `transformers`
To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the
[function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling)
in the `transformers` docs for more information.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_current_weather(location: str, format: str):
"""
Get the current weather
Args:
location: The city and state, e.g. San Francisco, CA
format: The temperature unit to use. Infer this from the users location. (choices: ["celsius", "fahrenheit"])
"""
pass
conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]
# format and tokenize the tool use prompt
inputs = tokenizer.apply_chat_template(
conversation,
tools=tools,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool
results to the chat history so that the model can use them in its next generation. For a full tool calling example, please
see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling),
and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be
exactly 9 alphanumeric characters.
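For illustration only, here is a minimal sketch of that missing cycle, continuing from the `conversation`, `tools`, `tokenizer` and `model` objects defined in the snippet above; the tool result value is invented.
```python
import random
import string

# Mistral expects tool call IDs of exactly 9 alphanumeric characters.
tool_call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))
tool_call = {"name": "get_current_weather", "arguments": {"location": "Paris, France", "format": "celsius"}}

# 1. Append the model's tool call to the chat history.
conversation.append(
    {"role": "assistant", "tool_calls": [{"type": "function", "id": tool_call_id, "function": tool_call}]}
)

# 2. Append the tool's result, referencing the same tool call ID (value invented here).
conversation.append(
    {"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_weather", "content": "22.0"}
)

# 3. Re-apply the chat template and let the model answer using the tool result.
inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```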
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
|
phogen/gemma-3-4b-pt-01pct-lora-proposal
|
phogen
| 2025-08-07T04:01:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:01:05Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jir88/gemma-3N-E4B-gutenberg-v2-F16-GGUF
|
jir88
| 2025-08-07T03:59:27Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3n",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:jir88/gemma-3N-E4B-gutenberg-v2",
"base_model:quantized:jir88/gemma-3N-E4B-gutenberg-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T03:59:25Z |
---
base_model: jir88/gemma-3N-E4B-gutenberg-v2
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# jir88/gemma-3N-E4B-gutenberg-v2-F16-GGUF
This LoRA adapter was converted to GGUF format from [`jir88/gemma-3N-E4B-gutenberg-v2`](https://huggingface.co/jir88/gemma-3N-E4B-gutenberg-v2) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/jir88/gemma-3N-E4B-gutenberg-v2) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora gemma-3N-E4B-gutenberg-v2-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora gemma-3N-E4B-gutenberg-v2-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|