modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF | Triangle104 | 2025-04-26T19:49:59Z | 2 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:HuggingFaceTB/smoltalk",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"base_model:cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B",
"base_model:quantized:cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-06T06:43:31Z |
---
base_model: cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Dolphin3.0-Qwen2.5-0.5B-Q5_K_S-GGUF --hf-file dolphin3.0-qwen2.5-0.5b-q5_k_s.gguf -c 2048
```
|
coderprem/crop-recommendation | coderprem | 2025-04-26T19:46:49Z | 0 | 0 | null |
[
"crop-recommendation",
"agriculture",
"random-forest",
"classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T19:17:50Z |
---
license: mit
tags:
- crop-recommendation
- agriculture
- random-forest
- classification
---
# Crop Recommendation Model 🌾
This model recommends the best crop based on soil and weather conditions.
Inputs required: Nitrogen, Phosphorus, Potassium, Temperature, Humidity, pH, Rainfall.
Trained on the Crop Recommendation Dataset.
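As a hedged usage sketch (the artifact name `model.pkl` and the exact feature order are assumptions, not stated on this card), loading the random-forest classifier might look like:
```python
import joblib
from huggingface_hub import hf_hub_download

# "model.pkl" is an assumed file name; check the repo's files for the actual artifact.
model_path = hf_hub_download("coderprem/crop-recommendation", "model.pkl")
model = joblib.load(model_path)

# Assumed feature order from the card: N, P, K, temperature (°C), humidity (%), pH, rainfall (mm).
sample = [[90, 42, 43, 20.8, 82.0, 6.5, 202.9]]
print(model.predict(sample))
```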
|
Triangle104/Dolphin3.0-Llama3.2-3B-Q8_0-GGUF | Triangle104 | 2025-04-26T19:45:55Z | 2 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:HuggingFaceTB/smoltalk",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"base_model:cognitivecomputations/Dolphin3.0-Llama3.2-3B",
"base_model:quantized:cognitivecomputations/Dolphin3.0-Llama3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-06T08:26:58Z |
---
base_model: cognitivecomputations/Dolphin3.0-Llama3.2-3B
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
license: llama3.2
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin3.0-Llama3.2-3B-Q8_0-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Llama3.2-3B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Llama3.2-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin3.0-Llama3.2-3B-Q8_0-GGUF --hf-file dolphin3.0-llama3.2-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin3.0-Llama3.2-3B-Q8_0-GGUF --hf-file dolphin3.0-llama3.2-3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Dolphin3.0-Llama3.2-3B-Q8_0-GGUF --hf-file dolphin3.0-llama3.2-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Dolphin3.0-Llama3.2-3B-Q8_0-GGUF --hf-file dolphin3.0-llama3.2-3b-q8_0.gguf -c 2048
```
|
PAUL11832/mrpaul-lora | PAUL11832 | 2025-04-26T19:42:06Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-26T19:08:32Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
kryg3n/llama381binstruct_summarize_short_merged | kryg3n | 2025-04-26T19:30:14Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-26T19:25:41Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dilarayavuz/md-benign-imdb-part-27-bert-base-uncased | dilarayavuz | 2025-04-26T19:18:56Z | 0 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T19:16:47Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.26404935121536255
f1: 0.9022265246853823
precision: 0.8834123222748815
recall: 0.9218595450049456
auc: 0.962160921471498
accuracy: 0.899
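A minimal inference sketch with the 🤗 `pipeline` API (the label names come from the repo's config and are not documented on this card):
```python
from transformers import pipeline

# Load the fine-tuned BERT classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="dilarayavuz/md-benign-imdb-part-27-bert-base-uncased",
)

# Label names depend on the repo's config (e.g. LABEL_0/LABEL_1 if none were set).
print(classifier("This movie was a complete waste of time."))
```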
|
kikibanyakuang/kiki.ganteng | kikibanyakuang | 2025-04-26T19:17:12Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T19:17:12Z |
---
license: apache-2.0
---
|
smirki/UIGEN-T2-7B-3600-Q8_0-GGUF | smirki | 2025-04-26T19:11:47Z | 0 | 0 | transformers |
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Tesslate/UIGEN-T2-7B-3600",
"base_model:quantized:Tesslate/UIGEN-T2-7B-3600",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T19:11:13Z |
---
base_model: Tesslate/UIGEN-T2-7B-3600
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# smirki/UIGEN-T2-7B-3600-Q8_0-GGUF
This model was converted to GGUF format from [`Tesslate/UIGEN-T2-7B-3600`](https://huggingface.co/Tesslate/UIGEN-T2-7B-3600) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Tesslate/UIGEN-T2-7B-3600) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo smirki/UIGEN-T2-7B-3600-Q8_0-GGUF --hf-file uigen-t2-7b-3600-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo smirki/UIGEN-T2-7B-3600-Q8_0-GGUF --hf-file uigen-t2-7b-3600-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo smirki/UIGEN-T2-7B-3600-Q8_0-GGUF --hf-file uigen-t2-7b-3600-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo smirki/UIGEN-T2-7B-3600-Q8_0-GGUF --hf-file uigen-t2-7b-3600-q8_0.gguf -c 2048
```
|
Triangle104/QwQ-32B-ArliAI-RpR-v2-Q6_K-GGUF | Triangle104 | 2025-04-26T19:10:42Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ArliAI/QwQ-32B-ArliAI-RpR-v2",
"base_model:quantized:ArliAI/QwQ-32B-ArliAI-RpR-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T19:07:37Z |
---
base_model: ArliAI/QwQ-32B-ArliAI-RpR-v2
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/9TIfNBdy29CDnn8NNIQPt.jpeg
---
# Triangle104/QwQ-32B-ArliAI-RpR-v2-Q6_K-GGUF
This model was converted to GGUF format from [`ArliAI/QwQ-32B-ArliAI-RpR-v2`](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v2) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style, unlike that of other models fine-tuned for RP.
With the release of QwQ, the first high-performing open-source reasoning model that can be easily trained, it became clear that the available instruct and creative-writing reasoning datasets contain only one response per example. Training reasoning models on this type of single-response dataset degrades output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
To create RpR, we first had to build the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was done by using the base QwQ Instruct model itself to generate the reasoning process for every turn in the RPMax conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference: that is, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual template-free segments dataset, ensuring the model is never trained to see the reasoning block in its context, just as it will be used at inference time.
The result of training QwQ on this dataset with this method is consistently coherent and interesting output, even in long multi-turn RP chats. As far as we know, this is the first correctly-trained reasoning model for RP and creative writing.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q6_K-GGUF --hf-file qwq-32b-arliai-rpr-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q6_K-GGUF --hf-file qwq-32b-arliai-rpr-v2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q6_K-GGUF --hf-file qwq-32b-arliai-rpr-v2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/QwQ-32B-ArliAI-RpR-v2-Q6_K-GGUF --hf-file qwq-32b-arliai-rpr-v2-q6_k.gguf -c 2048
```
|
dilarayavuz/md-benign-imdb-part-22-bert-base-uncased | dilarayavuz | 2025-04-26T18:58:14Z | 0 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T18:56:00Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.2965324819087982
f1: 0.899009900990099
precision: 0.8945812807881773
recall: 0.9034825870646767
auc: 0.9549733743343585
accuracy: 0.898
|
NadiaLunadia/nadia_lunadia | NadiaLunadia | 2025-04-26T18:55:48Z | 0 | 0 | diffusers |
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-26T18:24:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nadia
---
# Nadia_Lunadia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nadia` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nadia",
"lora_weights": "https://huggingface.co/NadiaLunadia/nadia_lunadia/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('NadiaLunadia/nadia_lunadia', weight_name='lora.safetensors')
image = pipeline('nadia').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/NadiaLunadia/nadia_lunadia/discussions) to add images that show off what you’ve made with this LoRA.
|
sagar2000/whatsapp | sagar2000 | 2025-04-26T18:22:50Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T18:22:36Z |
---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: whatsapp
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for whatsapp
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sagar2000/whatsapp", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
polinaZaroko/ast-gtzan | polinaZaroko | 2025-04-26T18:02:42Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"generated_from_trainer",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"region:us"
] | null | 2025-04-26T18:02:06Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ast-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6011
- Accuracy: 0.87
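A minimal inference sketch with the audio-classification `pipeline` (the input path is a placeholder; decoding local audio files requires ffmpeg):
```python
from transformers import pipeline

# Load the fine-tuned Audio Spectrogram Transformer from the Hub.
classifier = pipeline("audio-classification", model="polinaZaroko/ast-gtzan")

# "clip.wav" is a placeholder path for a local audio file.
print(classifier("clip.wav", top_k=3))
```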
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 1.2436 | 0.575 |
| No log | 2.0 | 200 | 0.6776 | 0.765 |
| No log | 3.0 | 300 | 0.5227 | 0.815 |
| No log | 4.0 | 400 | 0.6133 | 0.805 |
| 0.6647 | 5.0 | 500 | 0.6569 | 0.82 |
| 0.6647 | 6.0 | 600 | 0.6299 | 0.855 |
| 0.6647 | 7.0 | 700 | 0.6213 | 0.85 |
| 0.6647 | 8.0 | 800 | 0.6398 | 0.85 |
| 0.6647 | 9.0 | 900 | 0.6011 | 0.87 |
| 0.0343 | 10.0 | 1000 | 0.6092 | 0.87 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
|
sgarg26/vosk-hi | sgarg26 | 2025-04-26T18:00:01Z | 0 | 0 | null |
[
"hi",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T17:39:29Z |
---
license: apache-2.0
language:
- hi
---
|
pdanaher/my_awesome_opus_books_model | pdanaher | 2025-04-26T17:54:30Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-26T17:14:13Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6054
- Bleu: 6.2564
- Gen Len: 18.384
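A minimal inference sketch (the card does not state the language pair; English→French follows the usual opus_books T5 tutorial and is an assumption here):
```python
from transformers import pipeline

# The en->fr direction is an assumption; the card does not document the language pair.
translator = pipeline(
    "translation_en_to_fr",
    model="pdanaher/my_awesome_opus_books_model",
)

# T5 expects a task prefix in the input text.
text = "translate English to French: Legumes share resources with nitrogen-fixing soil bacteria."
print(translator(text))
```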
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8583 | 1.0 | 6355 | 1.6284 | 6.088 | 18.3938 |
| 1.8079 | 2.0 | 12710 | 1.6054 | 6.2564 | 18.384 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Kayabuki4/29_2 | Kayabuki4 | 2025-04-26T17:51:28Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-26T17:20:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
noahmarquie/ppo-LunarLander-v0 | noahmarquie | 2025-04-26T17:48:09Z | 0 | 0 | stable-baselines3 |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-26T17:47:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.34 +/- 21.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
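As a hedged sketch of what that could look like (the checkpoint file name follows the usual huggingface_sb3 naming convention and is an assumption; check the repo for the actual `.zip`):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The file name is an assumption based on huggingface_sb3 conventions.
checkpoint = load_from_hub(
    repo_id="noahmarquie/ppo-LunarLander-v0",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate over 10 episodes on the environment named in this card.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```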
|
xkaska02/czert_lr2e-05_bs4_train287_max_len32 | xkaska02 | 2025-04-26T17:41:13Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:UWB-AIR/Czert-B-base-cased",
"base_model:finetune:UWB-AIR/Czert-B-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-26T17:40:45Z |
---
library_name: transformers
base_model: UWB-AIR/Czert-B-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: czert_lr2e-05_bs4_train287_max_len32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# czert_lr2e-05_bs4_train287_max_len32
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1066
- Precision: 0.9561
- Recall: 0.9578
- F1: 0.9570
- Accuracy: 0.9724
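A minimal inference sketch (Czert is a Czech BERT; the entity label set lives in the repo's config, and the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Czert token classifier from the Hub.
nlp = pipeline(
    "token-classification",
    model="xkaska02/czert_lr2e-05_bs4_train287_max_len32",
    aggregation_strategy="simple",  # merge subword pieces into whole spans
)

# Illustrative Czech sentence; the label set is defined in the repo's config.
print(nlp("Václav Havel se narodil v Praze."))
```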
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.1572 | 0.9122 | 0.9228 | 0.9175 | 0.9553 |
| No log | 2.0 | 144 | 0.1071 | 0.9518 | 0.9537 | 0.9527 | 0.9739 |
| No log | 3.0 | 216 | 0.1064 | 0.9517 | 0.9517 | 0.9517 | 0.9739 |
| No log | 4.0 | 288 | 0.1067 | 0.9596 | 0.9633 | 0.9615 | 0.9786 |
| No log | 5.0 | 360 | 0.1255 | 0.9554 | 0.9517 | 0.9536 | 0.9748 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.20.0
|
genki10/BERT_V8_sp10_lw40_ex50_lo100_k2_k2_fold3 | genki10 | 2025-04-26T17:38:35Z | 0 | 0 | transformers |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-26T17:14:50Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo100_k2_k2_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo100_k2_k2_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6316
- Qwk: 0.5390
- Mse: 0.6315
- Rmse: 0.7947
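A minimal inference sketch (judging by the QWK/MSE metrics this head is likely a single-logit regressor, which is an assumption; `function_to_apply="none"` returns raw logits rather than softmax scores):
```python
from transformers import pipeline

# Load the fine-tuned BERT scorer from the Hub.
scorer = pipeline(
    "text-classification",
    model="genki10/BERT_V8_sp10_lw40_ex50_lo100_k2_k2_fold3",
    function_to_apply="none",  # raw logit(s); the regression-head guess is an assumption
)
print(scorer("An example essay response to be scored."))
```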
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 1.0 | 2 | 10.0642 | 0.0070 | 10.0628 | 3.1722 |
| No log | 2.0 | 4 | 7.7402 | 0.0 | 7.7390 | 2.7819 |
| No log | 3.0 | 6 | 5.7190 | 0.0623 | 5.7179 | 2.3912 |
| No log | 4.0 | 8 | 4.4655 | 0.0260 | 4.4645 | 2.1129 |
| No log | 5.0 | 10 | 3.6667 | 0.0114 | 3.6658 | 1.9146 |
| No log | 6.0 | 12 | 2.7481 | 0.0 | 2.7472 | 1.6575 |
| No log | 7.0 | 14 | 2.1096 | 0.0833 | 2.1088 | 1.4522 |
| No log | 8.0 | 16 | 1.6122 | 0.0463 | 1.6115 | 1.2694 |
| No log | 9.0 | 18 | 1.3155 | 0.0401 | 1.3149 | 1.1467 |
| No log | 10.0 | 20 | 1.0623 | 0.0302 | 1.0617 | 1.0304 |
| No log | 11.0 | 22 | 0.8795 | 0.3405 | 0.8790 | 0.9375 |
| No log | 12.0 | 24 | 0.8694 | 0.2688 | 0.8688 | 0.9321 |
| No log | 13.0 | 26 | 0.7253 | 0.4295 | 0.7249 | 0.8514 |
| No log | 14.0 | 28 | 0.6725 | 0.4057 | 0.6722 | 0.8199 |
| No log | 15.0 | 30 | 0.7272 | 0.4525 | 0.7270 | 0.8526 |
| No log | 16.0 | 32 | 0.6922 | 0.4728 | 0.6920 | 0.8319 |
| No log | 17.0 | 34 | 0.5430 | 0.5322 | 0.5430 | 0.7369 |
| No log | 18.0 | 36 | 0.7537 | 0.5087 | 0.7537 | 0.8681 |
| No log | 19.0 | 38 | 0.7497 | 0.5131 | 0.7498 | 0.8659 |
| No log | 20.0 | 40 | 0.5542 | 0.5669 | 0.5544 | 0.7446 |
| No log | 21.0 | 42 | 0.5729 | 0.5714 | 0.5732 | 0.7571 |
| No log | 22.0 | 44 | 0.6768 | 0.5197 | 0.6770 | 0.8228 |
| No log | 23.0 | 46 | 0.5805 | 0.5872 | 0.5809 | 0.7621 |
| No log | 24.0 | 48 | 0.5901 | 0.5919 | 0.5905 | 0.7684 |
| No log | 25.0 | 50 | 0.6259 | 0.5787 | 0.6260 | 0.7912 |
| No log | 26.0 | 52 | 0.6842 | 0.5443 | 0.6846 | 0.8274 |
| No log | 27.0 | 54 | 0.6459 | 0.5582 | 0.6461 | 0.8038 |
| No log | 28.0 | 56 | 0.8557 | 0.4955 | 0.8551 | 0.9247 |
| No log | 29.0 | 58 | 0.6223 | 0.5782 | 0.6224 | 0.7889 |
| No log | 30.0 | 60 | 0.7084 | 0.5343 | 0.7088 | 0.8419 |
| No log | 31.0 | 62 | 0.6123 | 0.5694 | 0.6127 | 0.7828 |
| No log | 32.0 | 64 | 0.9525 | 0.4148 | 0.9526 | 0.9760 |
| No log | 33.0 | 66 | 0.7980 | 0.4777 | 0.7982 | 0.8934 |
| No log | 34.0 | 68 | 0.6196 | 0.5586 | 0.6199 | 0.7873 |
| No log | 35.0 | 70 | 0.6990 | 0.5444 | 0.6993 | 0.8362 |
| No log | 36.0 | 72 | 0.5997 | 0.5856 | 0.5999 | 0.7745 |
| No log | 37.0 | 74 | 0.7856 | 0.4579 | 0.7856 | 0.8863 |
| No log | 38.0 | 76 | 0.6865 | 0.5047 | 0.6865 | 0.8286 |
| No log | 39.0 | 78 | 0.6322 | 0.5595 | 0.6324 | 0.7952 |
| No log | 40.0 | 80 | 0.6160 | 0.5934 | 0.6161 | 0.7849 |
| No log | 41.0 | 82 | 0.6981 | 0.5267 | 0.6980 | 0.8355 |
| No log | 42.0 | 84 | 0.6277 | 0.5702 | 0.6277 | 0.7923 |
| No log | 43.0 | 86 | 0.6149 | 0.5914 | 0.6151 | 0.7843 |
| No log | 44.0 | 88 | 0.6486 | 0.5535 | 0.6488 | 0.8055 |
| No log | 45.0 | 90 | 0.6268 | 0.5645 | 0.6269 | 0.7918 |
| No log | 46.0 | 92 | 0.6114 | 0.5670 | 0.6115 | 0.7820 |
| No log | 47.0 | 94 | 0.6167 | 0.5622 | 0.6167 | 0.7853 |
| No log | 48.0 | 96 | 0.6254 | 0.5472 | 0.6253 | 0.7908 |
| No log | 49.0 | 98 | 0.6108 | 0.5626 | 0.6109 | 0.7816 |
| No log | 50.0 | 100 | 0.6008 | 0.5650 | 0.6008 | 0.7751 |
| No log | 51.0 | 102 | 0.6337 | 0.5389 | 0.6336 | 0.7960 |
| No log | 52.0 | 104 | 0.6448 | 0.5514 | 0.6447 | 0.8029 |
| No log | 53.0 | 106 | 0.5994 | 0.5946 | 0.5994 | 0.7742 |
| No log | 54.0 | 108 | 0.6038 | 0.5905 | 0.6039 | 0.7771 |
| No log | 55.0 | 110 | 0.6945 | 0.5204 | 0.6945 | 0.8333 |
| No log | 56.0 | 112 | 0.7058 | 0.4943 | 0.7057 | 0.8400 |
| No log | 57.0 | 114 | 0.5988 | 0.5833 | 0.5988 | 0.7738 |
| No log | 58.0 | 116 | 0.5884 | 0.5834 | 0.5883 | 0.7670 |
| No log | 59.0 | 118 | 0.6311 | 0.5273 | 0.6309 | 0.7943 |
| No log | 60.0 | 120 | 0.6371 | 0.5198 | 0.6369 | 0.7980 |
| No log | 61.0 | 122 | 0.5920 | 0.5885 | 0.5919 | 0.7693 |
| No log | 62.0 | 124 | 0.6019 | 0.5748 | 0.6019 | 0.7758 |
| No log | 63.0 | 126 | 0.6976 | 0.5088 | 0.6975 | 0.8352 |
| No log | 64.0 | 128 | 0.7296 | 0.4961 | 0.7296 | 0.8542 |
| No log | 65.0 | 130 | 0.6359 | 0.5286 | 0.6359 | 0.7975 |
| No log | 66.0 | 132 | 0.6116 | 0.5747 | 0.6116 | 0.7820 |
| No log | 67.0 | 134 | 0.6416 | 0.5479 | 0.6415 | 0.8009 |
| No log | 68.0 | 136 | 0.6348 | 0.5435 | 0.6347 | 0.7967 |
| No log | 69.0 | 138 | 0.5840 | 0.5858 | 0.5840 | 0.7642 |
| No log | 70.0 | 140 | 0.5895 | 0.6019 | 0.5895 | 0.7678 |
| No log | 71.0 | 142 | 0.6354 | 0.5607 | 0.6355 | 0.7972 |
| No log | 72.0 | 144 | 0.6269 | 0.5576 | 0.6270 | 0.7918 |
| No log | 73.0 | 146 | 0.6232 | 0.5799 | 0.6232 | 0.7894 |
| No log | 74.0 | 148 | 0.6143 | 0.5848 | 0.6143 | 0.7838 |
| No log | 75.0 | 150 | 0.6393 | 0.5529 | 0.6393 | 0.7996 |
| No log | 76.0 | 152 | 0.6538 | 0.5430 | 0.6538 | 0.8086 |
| No log | 77.0 | 154 | 0.6299 | 0.5415 | 0.6300 | 0.7937 |
| No log | 78.0 | 156 | 0.6236 | 0.5523 | 0.6237 | 0.7897 |
| No log | 79.0 | 158 | 0.6045 | 0.5679 | 0.6045 | 0.7775 |
| No log | 80.0 | 160 | 0.6007 | 0.5609 | 0.6007 | 0.7750 |
| No log | 81.0 | 162 | 0.6132 | 0.5525 | 0.6132 | 0.7831 |
| No log | 82.0 | 164 | 0.6080 | 0.5509 | 0.6080 | 0.7797 |
| No log | 83.0 | 166 | 0.6131 | 0.5479 | 0.6131 | 0.7830 |
| No log | 84.0 | 168 | 0.6203 | 0.5326 | 0.6203 | 0.7876 |
| No log | 85.0 | 170 | 0.6104 | 0.5580 | 0.6103 | 0.7812 |
| No log | 86.0 | 172 | 0.6090 | 0.5594 | 0.6089 | 0.7803 |
| No log | 87.0 | 174 | 0.6031 | 0.5576 | 0.6031 | 0.7766 |
| No log | 88.0 | 176 | 0.6137 | 0.5431 | 0.6137 | 0.7834 |
| No log | 89.0 | 178 | 0.6337 | 0.5338 | 0.6336 | 0.7960 |
| No log | 90.0 | 180 | 0.6273 | 0.5360 | 0.6273 | 0.7920 |
| No log | 91.0 | 182 | 0.6273 | 0.5313 | 0.6273 | 0.7920 |
| No log | 92.0 | 184 | 0.6380 | 0.5387 | 0.6380 | 0.7987 |
| No log | 93.0 | 186 | 0.6320 | 0.5332 | 0.6320 | 0.7950 |
| No log | 94.0 | 188 | 0.6233 | 0.5321 | 0.6232 | 0.7895 |
| No log | 95.0 | 190 | 0.6243 | 0.5397 | 0.6242 | 0.7901 |
| No log | 96.0 | 192 | 0.6304 | 0.5398 | 0.6303 | 0.7939 |
| No log | 97.0 | 194 | 0.6348 | 0.5362 | 0.6347 | 0.7967 |
| No log | 98.0 | 196 | 0.6344 | 0.5413 | 0.6343 | 0.7964 |
| No log | 99.0 | 198 | 0.6330 | 0.5390 | 0.6329 | 0.7956 |
| No log | 100.0 | 200 | 0.6316 | 0.5390 | 0.6315 | 0.7947 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
GT1999/sequential_batches_mwp_sft_llama3.21b | GT1999 | 2025-04-26T17:27:46Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-19T14:53:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
markury/inkportraits-st | markury | 2025-04-26T17:20:05Z | 0 | 0 | diffusers |
[
"diffusers",
"safetensors",
"flux",
"flux-diffusers",
"text-to-image",
"image-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"lycoris",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-26T15:01:54Z |
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- image-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
pipeline_tag: text-to-image
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'a victorian portrait of a man, depicted in an intricate engraved style within an ornate oval frame on a sepia-toned background'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
---
# inkportraits-st
This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
a victorian portrait of a man, depicted in an intricate engraved style within an ornate oval frame on a sepia-toned background
```
## Validation settings
- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `832x1216`
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 133
- Training steps: 2400
- Learning rate: 9e-05
- Learning rate schedule: polynomial
- Warmup steps: 100
- Max grad value: 2.0
- Effective batch size: 2
- Micro-batch size: 1
- Gradient accumulation steps: 2
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0'])
- Optimizer: optimi-lion
- Trainable parameter precision: Pure BF16
- Base model precision: `int8-quanto`
- Caption dropout probability: 0.0%
### LyCORIS Config:
```json
{
"algo": "lokr",
"multiplier": 1.0,
"linear_dim": 10000,
"linear_alpha": 1,
"factor": 16,
"apply_preset": {
"target_module": [
"Attention",
"FeedForward"
],
"module_algo_map": {
"Attention": {
"factor": 16
},
"FeedForward": {
"factor": 16
}
}
}
}
```
## Datasets
### victorian-crop-512
- Repeats: 0
- Total number of images: 18
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
- Used for regularisation data: No
### victorian-crop-768
- Repeats: 0
- Total number of images: 18
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights
def download_adapter(repo_id: str):
import os
from huggingface_hub import hf_hub_download
adapter_filename = "pytorch_lora_weights.safetensors"
cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
os.makedirs(path_to_adapter, exist_ok=True)
hf_hub_download(
repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
)
return path_to_adapter_file
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_repo_id = 'markury/inkportraits-st'
adapter_filename = 'pytorch_lora_weights.safetensors'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()
prompt = "a victorian portrait of a man, depicted in an intricate engraved style within an ornate oval frame on a sepia-toned background"
## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
model_output = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=832,
height=1216,
guidance_scale=3.0,
).images[0]
model_output.save("output.png", format="PNG")
```
## Exponential Moving Average (EMA)
SimpleTuner generates a safetensors variant of the EMA weights as well as a `.pt` file.
The safetensors file is intended for inference, while the `.pt` file is for resuming finetuning.
The EMA model may provide a more well-rounded result, but it will typically feel undertrained compared to the full model, as it is a running, decayed average of the model weights.
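A minimal sketch for trying the EMA weights at inference, assuming the EMA safetensors file is published in this repo under a name like `ema_model.safetensors` (the exact filename is an assumption; check the repository's file listing):

```python
from huggingface_hub import hf_hub_download
from lycoris import create_lycoris_from_weights

# Hypothetical filename; verify against the repository contents.
ema_path = hf_hub_download(repo_id='markury/inkportraits-st', filename='ema_model.safetensors')
# `pipeline` is the FLUX pipeline built in the Inference section above.
wrapper, _ = create_lycoris_from_weights(1.0, ema_path, pipeline.transformer)
wrapper.merge_to()
```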
|
Ennthen/hyp1-g29b
|
Ennthen
| 2025-04-26T17:19:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T17:18:49Z |
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ennthen
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dilarayavuz/md-benign-imdb-part-3-bert-base-uncased
|
dilarayavuz
| 2025-04-26T17:15:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T17:12:57Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.301895409822464
f1: 0.8768656716417911
precision: 0.8362989323843416
recall: 0.9215686274509803
auc: 0.9514160664265706
accuracy: 0.868
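A quick way to try the checkpoint (a hedged sketch; the label names are not documented in this card):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="dilarayavuz/md-benign-imdb-part-3-bert-base-uncased")
print(clf("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```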
|
FlareRebellion/DarkHazard-v1.1-24b
|
FlareRebellion
| 2025-04-26T17:12:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B",
"base_model:merge:ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B",
"base_model:Yoesph/Haphazard-v1.1-24b",
"base_model:merge:Yoesph/Haphazard-v1.1-24b",
"base_model:aixonlab/Eurydice-24b-v2",
"base_model:merge:aixonlab/Eurydice-24b-v2",
"base_model:arcee-ai/Arcee-Blitz",
"base_model:merge:arcee-ai/Arcee-Blitz",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T14:53:42Z |
---
base_model:
- arcee-ai/Arcee-Blitz
- ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B
- Yoesph/Haphazard-v1.1-24b
- aixonlab/Eurydice-24b-v2
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
library_name: transformers
tags:
- mergekit
- merge
---
# DarkHazard-v1.1-24b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Inspiration
This merge was inspired by Yoesph/Haphazard-v1.1-24b
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz) as a base.
### Models Merged
The following models were included in the merge:
* [ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B](https://huggingface.co/ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B)
* [Yoesph/Haphazard-v1.1-24b](https://huggingface.co/Yoesph/Haphazard-v1.1-24b)
* [aixonlab/Eurydice-24b-v2](https://huggingface.co/aixonlab/Eurydice-24b-v2)
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: arcee-ai/Arcee-Blitz
merge_method: model_stock
dtype: bfloat16
models:
- model: aixonlab/Eurydice-24b-v2 # storytelling / RP
- model: Yoesph/Haphazard-v1.1-24b # Haphazard goodness + Cydonia
- model: ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B # uncensor + Cydonia
- model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b # Prompt Adherence
```
|
xkaska02/czert_lr2e-05_bs4_train287_max_len128
|
xkaska02
| 2025-04-26T17:04:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:UWB-AIR/Czert-B-base-cased",
"base_model:finetune:UWB-AIR/Czert-B-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-04-26T17:03:19Z |
---
library_name: transformers
base_model: UWB-AIR/Czert-B-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: czert_lr2e-05_bs4_train287_max_len128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# czert_lr2e-05_bs4_train287_max_len128
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1256
- Precision: 0.9351
- Recall: 0.9481
- F1: 0.9415
- Accuracy: 0.9662
## Model description
More information needed
## Intended uses & limitations
More information needed
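As a hedged usage sketch (the tag set is not documented here, so the output labels are whatever the checkpoint was trained with):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="xkaska02/czert_lr2e-05_bs4_train287_max_len128",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Praha je hlavní město České republiky."))
```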
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.2109 | 0.8758 | 0.8932 | 0.8844 | 0.9359 |
| No log | 2.0 | 144 | 0.1474 | 0.9116 | 0.9205 | 0.9160 | 0.9537 |
| No log | 3.0 | 216 | 0.1336 | 0.9455 | 0.9267 | 0.9360 | 0.9632 |
| No log | 4.0 | 288 | 0.1279 | 0.9360 | 0.9307 | 0.9333 | 0.9620 |
| No log | 5.0 | 360 | 0.1125 | 0.9422 | 0.9443 | 0.9432 | 0.9675 |
| No log | 6.0 | 432 | 0.1321 | 0.9409 | 0.9403 | 0.9406 | 0.9662 |
| 0.1505 | 7.0 | 504 | 0.1333 | 0.9455 | 0.9460 | 0.9458 | 0.9690 |
| 0.1505 | 8.0 | 576 | 0.1335 | 0.9478 | 0.9483 | 0.9480 | 0.9695 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.20.0
|
beslam55/es
|
beslam55
| 2025-04-26T17:03:39Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2025-04-26T17:03:38Z |
---
license: artistic-2.0
---
|
TOMFORD79/Menu_v1_5
|
TOMFORD79
| 2025-04-26T17:00:41Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-04-26T16:25:32Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
memevis/supp21
|
memevis
| 2025-04-26T16:50:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T16:50:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF
|
Triangle104
| 2025-04-26T16:47:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B",
"base_model:quantized:DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T16:44:47Z |
---
base_model: DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B`](https://huggingface.co/DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B) for more details on the model.
---
This model was converted to Nvidia's new "UltraLong8B" long-context Llama 3.1 model structure (https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct), which allowed a full transfer of "Dark Planet 8B" in all its "glory", so to speak. Thanks to Nvidia's structure, the new Dark Planet achieves far greater long-form generation, not only in context length but also in coherence. There is also a bump in overall performance.
This model has been designed to be relatively bulletproof and operates across all parameters, including temperature settings from 0 to 5.
It is an extraordinarily compressed model with a very low perplexity level (lower than Meta Llama 3 Instruct).
It is suited to any writing, fiction, or roleplay activity.
It requires the Llama 3 template and/or the "Command-R" template.
We suggest a context window of at least 8K (16K is better), as this model will generate long outputs unless you set a hard limit.
Likewise, as this is an instruct model, the more instructions in your prompt and/or system prompt, the greater the output quality.
In other words, less "guessing" equals far higher quality.
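As a hedged example (the flags below are standard llama.cpp options, not taken from the original card), a 16K context window with an explicit temperature might look like:

```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF \
  --hf-file llama-3.1-1million-ctx-dark-planet-8b-q5_k_m.gguf \
  -c 16384 --temp 1.2 \
  -p "You are a vivid fiction writer. Write a detailed scene: a storm rolls over a coastal town at dusk."
```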
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q5_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q5_k_m.gguf -c 2048
```
|
labhanshai/Kiara
|
labhanshai
| 2025-04-26T16:45:45Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T16:45:45Z |
---
license: apache-2.0
---
|
Sophie-Rain-Leak/VIRAL-Sophie-Rain-SpiderMan-Viral-VIDEO
|
Sophie-Rain-Leak
| 2025-04-26T16:42:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T16:41:58Z |
|
yujiepan/glm-4-tiny-random
|
yujiepan
| 2025-04-26T16:37:22Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-08T17:10:12Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [THUDM/GLM-4-32B-0414](https://huggingface.co/THUDM/GLM-4-32B-0414).
### Example usage:
```python
from transformers import pipeline
model_id = "yujiepan/glm-4-tiny-random"
pipe = pipeline(
"text-generation", model=model_id, device="cuda",
trust_remote_code=True, max_new_tokens=20,
)
print(pipe("Hello World!"))
```
### Codes to create this repo:
```python
import torch
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
pipeline,
set_seed,
)
source_model_id = "THUDM/GLM-4-32B-0414"
save_folder = "/tmp/yujiepan/glm-4-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(
source_model_id, trust_remote_code=True,
)
tokenizer.save_pretrained(save_folder)
config = AutoConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
config.hidden_size = 16
config.head_dim = 16
config.intermediate_size = 32
config.num_attention_heads = 1
config.num_hidden_layers = 2
config.num_key_value_heads = 1
config.tie_word_embeddings = False
model = AutoModelForCausalLM.from_config(
config,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.5)
print(name, p.shape)
model.save_pretrained(save_folder)
```
|
Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q4_K_M-GGUF
|
Triangle104
| 2025-04-26T16:35:46Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B",
"base_model:quantized:DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T16:33:16Z |
---
base_model: DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B`](https://huggingface.co/DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1million-ctx-Dark-Planet-8B) for more details on the model.
---
This model was converted to Nvidia's new "UltraLong8B" long-context Llama 3.1 model structure (https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct), which allowed a full transfer of "Dark Planet 8B" in all its "glory", so to speak. Thanks to Nvidia's structure, the new Dark Planet achieves far greater long-form generation, not only in context length but also in coherence. There is also a bump in overall performance.
This model has been designed to be relatively bulletproof and operates across all parameters, including temperature settings from 0 to 5.
It is an extraordinarily compressed model with a very low perplexity level (lower than Meta Llama 3 Instruct).
It is suited to any writing, fiction, or roleplay activity.
It requires the Llama 3 template and/or the "Command-R" template.
We suggest a context window of at least 8K (16K is better), as this model will generate long outputs unless you set a hard limit.
Likewise, as this is an instruct model, the more instructions in your prompt and/or system prompt, the greater the output quality.
In other words, less "guessing" equals far higher quality.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q4_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q4_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q4_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-1million-ctx-Dark-Planet-8B-Q4_K_M-GGUF --hf-file llama-3.1-1million-ctx-dark-planet-8b-q4_k_m.gguf -c 2048
```
|
HERE-Sophie-Rain-Spiderman-Leak-Video/Sophie.Rain.Spider.Man.Leaks.Video.Sophie.Rain.Spiderman.Video.Tutorial.Link
|
HERE-Sophie-Rain-Spiderman-Leak-Video
| 2025-04-26T16:23:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T16:22:25Z |
|
ViRAL-Sophie-Rain-Spiderman-Videox-Free/Sophie.Rain.Sophie.Rain.Spiderman.Video.Official
|
ViRAL-Sophie-Rain-Spiderman-Videox-Free
| 2025-04-26T16:20:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T16:19:36Z |
|
Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF
|
Triangle104
| 2025-04-26T16:19:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/Gemma-3-Glitter-27B",
"base_model:quantized:allura-org/Gemma-3-Glitter-27B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T16:15:31Z |
---
base_model: allura-org/Gemma-3-Glitter-27B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF
This model was converted to GGUF format from [`allura-org/Gemma-3-Glitter-27B`](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) for more details on the model.
---
A creative writing model based on Gemma 3 27B.
Columbidae/gemma-3-27b-half, a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of Starshine, a 50/50 IT and PT merge.)
The inclusion of the PT model does weaken instruction following, but it also weakens the censorship and hesitancy to participate in certain fictional stories. The prose also becomes more natural the less of the IT model is included.
This model does better with short, to-the-point prompts; long, detailed system prompts will often confuse it. (Testing with 1000-2000 token system prompts gave lackluster results compared to 100-500 token prompts.)
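For instance (a hedged example, not from the original card), a short, direct prompt:

```bash
llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF \
  --hf-file gemma-3-glitter-27b-q8_0.gguf \
  -p "Write a 300-word story about a lighthouse keeper who finds a message in a bottle."
```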
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q8_0-GGUF --hf-file gemma-3-glitter-27b-q8_0.gguf -c 2048
```
|
genki10/BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3
|
genki10
| 2025-04-26T16:17:15Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T16:03:08Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5564
- Qwk: 0.5636
- Mse: 0.5564
- Rmse: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
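A hedged usage sketch (the evaluation metrics above suggest a regression-style scoring head, so read the raw score rather than the label name):

```python
from transformers import pipeline

scorer = pipeline("text-classification", model="genki10/BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3")
print(scorer("The essay argues that renewable energy adoption is accelerating."))
```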
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 2 | 11.9063 | -0.0279 | 11.9038 | 3.4502 |
| No log | 2.0 | 4 | 9.2561 | 0.0 | 9.2545 | 3.0421 |
| No log | 3.0 | 6 | 7.3906 | 0.0 | 7.3892 | 2.7183 |
| No log | 4.0 | 8 | 6.1470 | 0.0104 | 6.1455 | 2.4790 |
| No log | 5.0 | 10 | 4.7307 | 0.0114 | 4.7296 | 2.1748 |
| No log | 6.0 | 12 | 3.6712 | 0.0038 | 3.6702 | 1.9158 |
| No log | 7.0 | 14 | 2.9987 | 0.0 | 2.9976 | 1.7313 |
| No log | 8.0 | 16 | 2.2411 | 0.1175 | 2.2402 | 1.4967 |
| No log | 9.0 | 18 | 1.7994 | 0.0365 | 1.7986 | 1.3411 |
| No log | 10.0 | 20 | 1.5119 | 0.0302 | 1.5112 | 1.2293 |
| No log | 11.0 | 22 | 1.1775 | 0.0302 | 1.1769 | 1.0849 |
| No log | 12.0 | 24 | 1.0014 | 0.0202 | 1.0008 | 1.0004 |
| No log | 13.0 | 26 | 0.8976 | 0.3195 | 0.8970 | 0.9471 |
| No log | 14.0 | 28 | 0.8228 | 0.3673 | 0.8222 | 0.9068 |
| No log | 15.0 | 30 | 0.8523 | 0.2800 | 0.8517 | 0.9229 |
| No log | 16.0 | 32 | 0.7316 | 0.2326 | 0.7313 | 0.8551 |
| No log | 17.0 | 34 | 0.6307 | 0.3785 | 0.6306 | 0.7941 |
| No log | 18.0 | 36 | 0.6821 | 0.4362 | 0.6819 | 0.8258 |
| No log | 19.0 | 38 | 0.5622 | 0.4421 | 0.5621 | 0.7497 |
| No log | 20.0 | 40 | 0.5782 | 0.4760 | 0.5781 | 0.7603 |
| No log | 21.0 | 42 | 0.6375 | 0.5111 | 0.6373 | 0.7983 |
| No log | 22.0 | 44 | 0.5141 | 0.5263 | 0.5140 | 0.7169 |
| No log | 23.0 | 46 | 0.7375 | 0.4655 | 0.7373 | 0.8587 |
| No log | 24.0 | 48 | 0.5200 | 0.5555 | 0.5201 | 0.7212 |
| No log | 25.0 | 50 | 0.5088 | 0.6219 | 0.5088 | 0.7133 |
| No log | 26.0 | 52 | 0.7115 | 0.5143 | 0.7114 | 0.8435 |
| No log | 27.0 | 54 | 0.7079 | 0.4985 | 0.7077 | 0.8412 |
| No log | 28.0 | 56 | 0.5335 | 0.5843 | 0.5335 | 0.7304 |
| No log | 29.0 | 58 | 0.6170 | 0.5431 | 0.6170 | 0.7855 |
| No log | 30.0 | 60 | 0.5067 | 0.5731 | 0.5068 | 0.7119 |
| No log | 31.0 | 62 | 0.6167 | 0.5527 | 0.6169 | 0.7854 |
| No log | 32.0 | 64 | 0.5177 | 0.5983 | 0.5178 | 0.7196 |
| No log | 33.0 | 66 | 0.5050 | 0.6377 | 0.5049 | 0.7106 |
| No log | 34.0 | 68 | 0.5707 | 0.5896 | 0.5706 | 0.7554 |
| No log | 35.0 | 70 | 0.6511 | 0.5396 | 0.6510 | 0.8068 |
| No log | 36.0 | 72 | 0.5217 | 0.5770 | 0.5215 | 0.7222 |
| No log | 37.0 | 74 | 0.5531 | 0.5585 | 0.5529 | 0.7436 |
| No log | 38.0 | 76 | 0.6864 | 0.4928 | 0.6862 | 0.8284 |
| No log | 39.0 | 78 | 0.6373 | 0.5037 | 0.6372 | 0.7982 |
| No log | 40.0 | 80 | 0.5506 | 0.5552 | 0.5506 | 0.7420 |
| No log | 41.0 | 82 | 0.5623 | 0.5400 | 0.5622 | 0.7498 |
| No log | 42.0 | 84 | 0.6502 | 0.5007 | 0.6500 | 0.8062 |
| No log | 43.0 | 86 | 0.5781 | 0.5547 | 0.5779 | 0.7602 |
| No log | 44.0 | 88 | 0.5708 | 0.5663 | 0.5706 | 0.7554 |
| No log | 45.0 | 90 | 0.6341 | 0.5154 | 0.6339 | 0.7962 |
| No log | 46.0 | 92 | 0.5815 | 0.5502 | 0.5815 | 0.7626 |
| No log | 47.0 | 94 | 0.6164 | 0.5149 | 0.6164 | 0.7851 |
| No log | 48.0 | 96 | 0.5450 | 0.5598 | 0.5450 | 0.7382 |
| No log | 49.0 | 98 | 0.5788 | 0.5238 | 0.5788 | 0.7608 |
| No log | 50.0 | 100 | 0.5876 | 0.5205 | 0.5875 | 0.7665 |
| No log | 51.0 | 102 | 0.5490 | 0.5597 | 0.5489 | 0.7409 |
| No log | 52.0 | 104 | 0.5796 | 0.5422 | 0.5795 | 0.7612 |
| No log | 53.0 | 106 | 0.5875 | 0.5350 | 0.5874 | 0.7664 |
| No log | 54.0 | 108 | 0.5454 | 0.5723 | 0.5453 | 0.7384 |
| No log | 55.0 | 110 | 0.5645 | 0.5452 | 0.5643 | 0.7512 |
| No log | 56.0 | 112 | 0.5550 | 0.5512 | 0.5550 | 0.7450 |
| No log | 57.0 | 114 | 0.5879 | 0.5476 | 0.5878 | 0.7667 |
| No log | 58.0 | 116 | 0.5735 | 0.5610 | 0.5734 | 0.7572 |
| No log | 59.0 | 118 | 0.5411 | 0.5687 | 0.5410 | 0.7356 |
| No log | 60.0 | 120 | 0.5391 | 0.5799 | 0.5391 | 0.7342 |
| No log | 61.0 | 122 | 0.6113 | 0.5802 | 0.6113 | 0.7818 |
| No log | 62.0 | 124 | 0.6268 | 0.5376 | 0.6267 | 0.7916 |
| No log | 63.0 | 126 | 0.5564 | 0.5636 | 0.5564 | 0.7459 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
filipesantoscv11/d2dce406-2631-4fde-94d2-c69957fdb02c
|
filipesantoscv11
| 2025-04-26T16:16:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T16:15:24Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d2dce406-2631-4fde-94d2-c69957fdb02c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 723f999c3d4f537d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/723f999c3d4f537d_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/d2dce406-2631-4fde-94d2-c69957fdb02c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/723f999c3d4f537d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b222ded9-403c-4db5-a5de-962567cdbc68
wandb_project: s56-6
wandb_run: your_name
wandb_runid: b222ded9-403c-4db5-a5de-962567cdbc68
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d2dce406-2631-4fde-94d2-c69957fdb02c
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.8355 | 0.2128 | 200 | 4.8581 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
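A hedged loading sketch (standard PEFT API; the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
model = PeftModel.from_pretrained(base, "filipesantoscv11/d2dce406-2631-4fde-94d2-c69957fdb02c")
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
inputs = tok("Question: What is the capital of France?\nAnswer:", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```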
|
aleegis/10ceae7c-c0da-4291-bc62-4def26b4d746
|
aleegis
| 2025-04-26T16:14:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T16:09:51Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 10ceae7c-c0da-4291-bc62-4def26b4d746
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 723f999c3d4f537d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/723f999c3d4f537d_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/10ceae7c-c0da-4291-bc62-4def26b4d746
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/723f999c3d4f537d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: b222ded9-403c-4db5-a5de-962567cdbc68
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b222ded9-403c-4db5-a5de-962567cdbc68
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 10ceae7c-c0da-4291-bc62-4def26b4d746
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
aleegis/46f63151-4fd2-42c5-abb0-1ea98fe6268a
|
aleegis
| 2025-04-26T16:14:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T16:10:09Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46f63151-4fd2-42c5-abb0-1ea98fe6268a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 723f999c3d4f537d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/723f999c3d4f537d_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/46f63151-4fd2-42c5-abb0-1ea98fe6268a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/723f999c3d4f537d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: b222ded9-403c-4db5-a5de-962567cdbc68
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b222ded9-403c-4db5-a5de-962567cdbc68
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 46f63151-4fd2-42c5-abb0-1ea98fe6268a
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
BootesVoid/cm9y8nkcg01bbqeqol8wwsasi_cm9ye0fbg01kcqeqos7zsc8q9
|
BootesVoid
| 2025-04-26T16:13:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-26T16:13:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: F25INTK210
---
# Cm9Y8Nkcg01Bbqeqol8Wwsasi_Cm9Ye0Fbg01Kcqeqos7Zsc8Q9
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `F25INTK210` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "F25INTK210",
"lora_weights": "https://huggingface.co/BootesVoid/cm9y8nkcg01bbqeqol8wwsasi_cm9ye0fbg01kcqeqos7zsc8q9/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9y8nkcg01bbqeqol8wwsasi_cm9ye0fbg01kcqeqos7zsc8q9', weight_name='lora.safetensors')
image = pipeline('F25INTK210').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9y8nkcg01bbqeqol8wwsasi_cm9ye0fbg01kcqeqos7zsc8q9/discussions) to add images that show off what you’ve made with this LoRA.
|
sergioalves/f46c06a5-5d69-4b79-b55c-9aad986959ad
|
sergioalves
| 2025-04-26T16:10:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T16:09:54Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f46c06a5-5d69-4b79-b55c-9aad986959ad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 723f999c3d4f537d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/723f999c3d4f537d_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/f46c06a5-5d69-4b79-b55c-9aad986959ad
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/723f999c3d4f537d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b222ded9-403c-4db5-a5de-962567cdbc68
wandb_project: s56-8
wandb_run: your_name
wandb_runid: b222ded9-403c-4db5-a5de-962567cdbc68
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f46c06a5-5d69-4b79-b55c-9aad986959ad
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.8408 | 0.2128 | 200 | 4.9744 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/Gemma-3-Glitter-27B-Q6_K-GGUF
|
Triangle104
| 2025-04-26T16:09:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/Gemma-3-Glitter-27B",
"base_model:quantized:allura-org/Gemma-3-Glitter-27B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T16:06:35Z |
---
base_model: allura-org/Gemma-3-Glitter-27B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-Glitter-27B-Q6_K-GGUF
This model was converted to GGUF format from [`allura-org/Gemma-3-Glitter-27B`](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) for more details on the model.
---
A creative writing model based on Gemma 3 27B.
Columbidae/gemma-3-27b-half, a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of Starshine, a 50/50 IT and PT merge.)
The inclusion of the PT model does weaken instruction following, but it also weakens the censorship and hesitancy to participate in certain fictional stories. The prose also becomes more natural the less of the IT model is included.
This model does better with short, to-the-point prompts; long, detailed system prompts will often confuse it. (Testing with 1000-2000 token system prompts gave lackluster results compared to 100-500 token prompts.)
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q6_K-GGUF --hf-file gemma-3-glitter-27b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q6_K-GGUF --hf-file gemma-3-glitter-27b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q6_K-GGUF --hf-file gemma-3-glitter-27b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q6_K-GGUF --hf-file gemma-3-glitter-27b-q6_k.gguf -c 2048
```
|
hafsa101010/cat_toy-stable-diffusion-v2
|
hafsa101010
| 2025-04-26T16:06:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-04-26T14:45:57Z |
---
base_model: stabilityai/stable-diffusion-2
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of cat toy
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - hafsa101010/cat_toy-stable-diffusion-v2
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were trained on a photo of cat toy using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (not from the original card): load the base model and
# apply these LoRA weights with diffusers. Prompt and step count are illustrative.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hafsa101010/cat_toy-stable-diffusion-v2")
image = pipe("a photo of cat toy", num_inference_steps=25).images[0]
image.save("cat_toy.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Triangle104/Gemma-3-Glitter-27B-Q5_K_M-GGUF
|
Triangle104
| 2025-04-26T16:00:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/Gemma-3-Glitter-27B",
"base_model:quantized:allura-org/Gemma-3-Glitter-27B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T15:30:57Z |
---
base_model: allura-org/Gemma-3-Glitter-27B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Gemma-3-Glitter-27B-Q5_K_M-GGUF
This model was converted to GGUF format from [`allura-org/Gemma-3-Glitter-27B`](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) for more details on the model.
---
A creative writing model based on Gemma 3 27B.
Columbidae/gemma-3-27b-half, a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of Starshine, a 50/50 IT and PT merge.)
The inclusion of the PT model does weaken instruction following, but it also weakens the censorship and hesitancy to participate in certain fictional stories. The prose also becomes more natural the less of the IT model is included.
This model does better with short, to-the-point prompts; long, detailed system prompts will often confuse it. (Testing with 1000-2000 token system prompts gave lackluster results compared to 100-500 token prompts.)
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q5_K_M-GGUF --hf-file gemma-3-glitter-27b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q5_K_M-GGUF --hf-file gemma-3-glitter-27b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Gemma-3-Glitter-27B-Q5_K_M-GGUF --hf-file gemma-3-glitter-27b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Gemma-3-Glitter-27B-Q5_K_M-GGUF --hf-file gemma-3-glitter-27b-q5_k_m.gguf -c 2048
```
|
anhkiet5655t/bdg
|
anhkiet5655t
| 2025-04-26T16:00:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-04-26T16:00:43Z |
---
license: creativeml-openrail-m
---
|
Otakadelic/MT1-Gen13-gemma-2-9B-Q8_0-GGUF
|
Otakadelic
| 2025-04-26T15:47:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:zelk12/MT1-Gen13-gemma-2-9B",
"base_model:quantized:zelk12/MT1-Gen13-gemma-2-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T15:46:43Z |
---
base_model: zelk12/MT1-Gen13-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Otakadelic/MT1-Gen13-gemma-2-9B-Q8_0-GGUF
This model was converted to GGUF format from [`zelk12/MT1-Gen13-gemma-2-9B`](https://huggingface.co/zelk12/MT1-Gen13-gemma-2-9B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT1-Gen13-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Otakadelic/MT1-Gen13-gemma-2-9B-Q8_0-GGUF --hf-file mt1-gen13-gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Otakadelic/MT1-Gen13-gemma-2-9B-Q8_0-GGUF --hf-file mt1-gen13-gemma-2-9b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Otakadelic/MT1-Gen13-gemma-2-9B-Q8_0-GGUF --hf-file mt1-gen13-gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Otakadelic/MT1-Gen13-gemma-2-9B-Q8_0-GGUF --hf-file mt1-gen13-gemma-2-9b-q8_0.gguf -c 2048
```
|
genki10/BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold1
|
genki10
| 2025-04-26T15:40:21Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T15:25:41Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5107
- Qwk: 0.6201
- Mse: 0.5101
- Rmse: 0.7142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` reconstruction follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
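A sketch of roughly equivalent `TrainingArguments` (an assumed reconstruction from the list above, not the actual training script):
```python
# Approximate TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```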
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 2 | 7.6515 | 0.0 | 7.6492 | 2.7657 |
| No log | 2.0 | 4 | 7.2604 | 0.0 | 7.2581 | 2.6941 |
| No log | 3.0 | 6 | 6.6180 | 0.0 | 6.6158 | 2.5721 |
| No log | 4.0 | 8 | 5.3839 | -0.0131 | 5.3819 | 2.3199 |
| No log | 5.0 | 10 | 4.1816 | 0.0 | 4.1795 | 2.0444 |
| No log | 6.0 | 12 | 3.3117 | 0.0 | 3.3097 | 1.8193 |
| No log | 7.0 | 14 | 2.5437 | 0.0 | 2.5420 | 1.5944 |
| No log | 8.0 | 16 | 1.9512 | 0.0645 | 1.9495 | 1.3963 |
| No log | 9.0 | 18 | 1.5546 | 0.0211 | 1.5531 | 1.2462 |
| No log | 10.0 | 20 | 1.2505 | 0.0 | 1.2490 | 1.1176 |
| No log | 11.0 | 22 | 1.0495 | 0.0 | 1.0481 | 1.0238 |
| No log | 12.0 | 24 | 0.9373 | 0.0 | 0.9360 | 0.9675 |
| No log | 13.0 | 26 | 0.8444 | 0.3069 | 0.8432 | 0.9183 |
| No log | 14.0 | 28 | 0.7818 | 0.2446 | 0.7807 | 0.8836 |
| No log | 15.0 | 30 | 0.6906 | 0.3398 | 0.6897 | 0.8305 |
| No log | 16.0 | 32 | 0.8725 | 0.2255 | 0.8714 | 0.9335 |
| No log | 17.0 | 34 | 0.7393 | 0.3427 | 0.7383 | 0.8593 |
| No log | 18.0 | 36 | 0.6456 | 0.4903 | 0.6447 | 0.8030 |
| No log | 19.0 | 38 | 0.5972 | 0.5306 | 0.5964 | 0.7723 |
| No log | 20.0 | 40 | 0.6241 | 0.3967 | 0.6233 | 0.7895 |
| No log | 21.0 | 42 | 0.6082 | 0.4253 | 0.6074 | 0.7794 |
| No log | 22.0 | 44 | 0.5812 | 0.5626 | 0.5804 | 0.7618 |
| No log | 23.0 | 46 | 0.6714 | 0.5680 | 0.6706 | 0.8189 |
| No log | 24.0 | 48 | 0.4739 | 0.5490 | 0.4731 | 0.6878 |
| No log | 25.0 | 50 | 0.5180 | 0.5103 | 0.5172 | 0.7192 |
| No log | 26.0 | 52 | 0.5498 | 0.5795 | 0.5489 | 0.7409 |
| No log | 27.0 | 54 | 0.5799 | 0.5851 | 0.5790 | 0.7609 |
| No log | 28.0 | 56 | 0.4841 | 0.5400 | 0.4832 | 0.6951 |
| No log | 29.0 | 58 | 0.5837 | 0.4843 | 0.5829 | 0.7635 |
| No log | 30.0 | 60 | 0.5404 | 0.5148 | 0.5396 | 0.7346 |
| No log | 31.0 | 62 | 0.4938 | 0.5625 | 0.4930 | 0.7021 |
| No log | 32.0 | 64 | 0.5099 | 0.6038 | 0.5093 | 0.7136 |
| No log | 33.0 | 66 | 0.6422 | 0.5813 | 0.6416 | 0.8010 |
| No log | 34.0 | 68 | 0.4865 | 0.6511 | 0.4860 | 0.6972 |
| No log | 35.0 | 70 | 0.4862 | 0.6741 | 0.4857 | 0.6970 |
| No log | 36.0 | 72 | 0.4878 | 0.6634 | 0.4874 | 0.6981 |
| No log | 37.0 | 74 | 0.4980 | 0.6667 | 0.4975 | 0.7053 |
| No log | 38.0 | 76 | 0.4810 | 0.6569 | 0.4805 | 0.6932 |
| No log | 39.0 | 78 | 0.5480 | 0.5754 | 0.5472 | 0.7397 |
| No log | 40.0 | 80 | 0.5823 | 0.5561 | 0.5815 | 0.7626 |
| No log | 41.0 | 82 | 0.5469 | 0.5734 | 0.5461 | 0.7390 |
| No log | 42.0 | 84 | 0.4820 | 0.6131 | 0.4812 | 0.6937 |
| No log | 43.0 | 86 | 0.4891 | 0.6231 | 0.4885 | 0.6989 |
| No log | 44.0 | 88 | 0.5023 | 0.6123 | 0.5016 | 0.7083 |
| No log | 45.0 | 90 | 0.5295 | 0.6258 | 0.5288 | 0.7272 |
| No log | 46.0 | 92 | 0.5997 | 0.5894 | 0.5991 | 0.7740 |
| No log | 47.0 | 94 | 0.5581 | 0.5967 | 0.5575 | 0.7466 |
| No log | 48.0 | 96 | 0.5917 | 0.5706 | 0.5909 | 0.7687 |
| No log | 49.0 | 98 | 0.5934 | 0.5756 | 0.5927 | 0.7698 |
| No log | 50.0 | 100 | 0.5316 | 0.6088 | 0.5310 | 0.7287 |
| No log | 51.0 | 102 | 0.5498 | 0.5986 | 0.5491 | 0.7410 |
| No log | 52.0 | 104 | 0.5961 | 0.5850 | 0.5953 | 0.7715 |
| No log | 53.0 | 106 | 0.6112 | 0.5802 | 0.6103 | 0.7812 |
| No log | 54.0 | 108 | 0.5362 | 0.6060 | 0.5355 | 0.7318 |
| No log | 55.0 | 110 | 0.5969 | 0.5910 | 0.5963 | 0.7722 |
| No log | 56.0 | 112 | 0.5527 | 0.6012 | 0.5520 | 0.7430 |
| No log | 57.0 | 114 | 0.5307 | 0.5982 | 0.5299 | 0.7280 |
| No log | 58.0 | 116 | 0.5171 | 0.5907 | 0.5164 | 0.7186 |
| No log | 59.0 | 118 | 0.5004 | 0.6131 | 0.4998 | 0.7069 |
| No log | 60.0 | 120 | 0.5098 | 0.5932 | 0.5092 | 0.7136 |
| No log | 61.0 | 122 | 0.4910 | 0.6149 | 0.4903 | 0.7002 |
| No log | 62.0 | 124 | 0.5223 | 0.6135 | 0.5215 | 0.7222 |
| No log | 63.0 | 126 | 0.4927 | 0.6308 | 0.4920 | 0.7014 |
| No log | 64.0 | 128 | 0.5097 | 0.6205 | 0.5091 | 0.7135 |
| No log | 65.0 | 130 | 0.5107 | 0.6201 | 0.5101 | 0.7142 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
DanielNRU/pollen-ner-cycle-300
|
DanielNRU
| 2025-04-26T15:21:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"base_model:adapter:DeepPavlov/bert-base-bg-cs-pl-ru-cased",
"region:us"
] | null | 2025-04-26T03:18:16Z |
---
library_name: peft
base_model: DeepPavlov/bert-base-bg-cs-pl-ru-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: pollen-ner-cycle-300
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pollen-ner-cycle-300
This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0346
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
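Since this repository holds a PEFT adapter, one plausible way to load it is on top of the base encoder (a minimal sketch assuming the standard PEFT API; the label mapping is not specified in this card):
```python
# Attach the adapter to the base multilingual BERT model for token classification.
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base = AutoModelForTokenClassification.from_pretrained("DeepPavlov/bert-base-bg-cs-pl-ru-cased")
model = PeftModel.from_pretrained(base, "DanielNRU/pollen-ner-cycle-300")
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-bg-cs-pl-ru-cased")
```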
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|
| No log | 1.0 | 38 | 1.1272 | 0.0 | 0.0 | 0.0 |
| 1.5618 | 2.0 | 76 | 1.0511 | 0.0 | 0.0 | 0.0 |
| 1.095 | 3.0 | 114 | 1.0420 | 0.0 | 0.0 | 0.0 |
| 1.0985 | 4.0 | 152 | 1.0366 | 0.0 | 0.0 | 0.0 |
| 1.0985 | 5.0 | 190 | 1.0346 | 0.0 | 0.0 | 0.0 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
vermoney/e62e4f47-43fb-4278-a6fe-958b20291be9
|
vermoney
| 2025-04-26T15:11:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T15:03:44Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e62e4f47-43fb-4278-a6fe-958b20291be9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 678109d9bdb718ed_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/678109d9bdb718ed_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/e62e4f47-43fb-4278-a6fe-958b20291be9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/678109d9bdb718ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 39f481fc-56ef-49a8-b4cf-f573a51ee02d
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 39f481fc-56ef-49a8-b4cf-f573a51ee02d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e62e4f47-43fb-4278-a6fe-958b20291be9
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0197
## Model description
More information needed
## Intended uses & limitations
More information needed
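As a minimal usage sketch (assuming the standard PEFT API; this is not part of the original card), the LoRA adapter can be loaded on top of the base model:
```python
# Load the base model, then apply this LoRA adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "vermoney/e62e4f47-43fb-4278-a6fe-958b20291be9")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")
```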
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8776 | 0.0223 | 200 | 2.0197 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ZjWRq19q9EC1/fshshd
|
ZjWRq19q9EC1
| 2025-04-26T12:05:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T12:05:57Z |
---
license: apache-2.0
---
|
RzZ/Qwen2.5-VL-3B-GGUF
|
RzZ
| 2025-04-26T11:59:49Z | 710 | 0 | null |
[
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-15T17:16:01Z |
---
license: mit
---
GGUF file for quick testing of the WIP implementation of Qwen2.5 VL in llama.cpp.
You can find the latest version of the implementation [here](https://github.com/HimariO/llama.cpp.qwen2vl/tree/qwen25-vl). (Don't forget to switch to the `qwen25-vl` branch.)
You can also follow the llama.cpp draft PR [here](https://github.com/ggml-org/llama.cpp/pull/12402).
|
tcapelle/grpo-qwen7b-triton-5ep
|
tcapelle
| 2025-04-26T11:59:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T11:58:00Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: workspace/data/axolotl-artifacts/grpo-beta-zero
tags:
- generated_from_trainer
licence: license
---
# Model Card for workspace/data/axolotl-artifacts/grpo-beta-zero
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tcapelle/grpo-qwen7b-triton-5ep", device="cuda")  # repo id assumed from this page; the auto-generated card had model="None"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/grpo-cuda/axolotl-grpo/runs/asgasvq2)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
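For context, GRPO training in TRL follows the pattern below (a generic sketch based on TRL's documented quickstart, with a toy length-based reward; it is not this model's actual training recipe):
```python
# Generic GRPO training loop with TRL; the dataset and reward are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```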
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kaylapercy/kaylapercy
|
kaylapercy
| 2025-04-26T11:44:18Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2025-04-26T11:44:18Z |
---
license: bsd-3-clause-clear
---
|
Silin1590/Qwen-0d5B-Int-CoT
|
Silin1590
| 2025-04-26T11:05:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T11:05:19Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Paramaters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
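Upgrading is a one-liner (assuming a pip-based environment):
```bash
pip install --upgrade "transformers>=4.37.0"
```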
## Quickstart
Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
hasdal/71ed1701-8cd9-4105-846c-7023aded0cb7
|
hasdal
| 2025-04-26T11:01:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T09:28:14Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 71ed1701-8cd9-4105-846c-7023aded0cb7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codellama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - fd2d316e66a34327_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/fd2d316e66a34327_train_data.json
  type:
    field_instruction: problem
    field_output: solution
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: hasdal/71ed1701-8cd9-4105-846c-7023aded0cb7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00022
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/fd2d316e66a34327_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 30
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b834cdfc-e127-496a-a2e6-427ed26236a6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b834cdfc-e127-496a-a2e6-427ed26236a6
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 71ed1701-8cd9-4105-846c-7023aded0cb7
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00022
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0019 | 1 | nan |
| 0.0 | 0.9560 | 500 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Silin1590/Qwen-7B-Int-Soc-CoA
|
Silin1590
| 2025-04-26T10:59:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T10:56:41Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-7B
tags:
- chat
library_name: transformers
---
# Qwen2.5-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Paramaters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
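A minimal offline-inference sketch with vLLM (illustrative; the sampling values are arbitrary):
```python
# Serve the model with vLLM's offline LLM API and sample one completion.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)
```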
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
genki10/BERT_V8_sp10_lw40_ex50_lo100_k1_k1_fold2
|
genki10
| 2025-04-26T10:56:06Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T10:41:28Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo100_k1_k1_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo100_k1_k1_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6126
- Qwk: 0.5339
- Mse: 0.6123
- Rmse: 0.7825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 1.0 | 1 | 14.3248 | 0.0 | 14.3247 | 3.7848 |
| No log | 2.0 | 2 | 12.4278 | 0.0 | 12.4278 | 3.5253 |
| No log | 3.0 | 3 | 10.9240 | 0.0240 | 10.9242 | 3.3052 |
| No log | 4.0 | 4 | 9.7028 | 0.0012 | 9.7030 | 3.1150 |
| No log | 5.0 | 5 | 8.8166 | 0.0 | 8.8167 | 2.9693 |
| No log | 6.0 | 6 | 7.9481 | 0.0 | 7.9482 | 2.8193 |
| No log | 7.0 | 7 | 6.7717 | 0.0 | 6.7719 | 2.6023 |
| No log | 8.0 | 8 | 5.7869 | 0.0381 | 5.7871 | 2.4056 |
| No log | 9.0 | 9 | 5.1713 | 0.0400 | 5.1717 | 2.2741 |
| No log | 10.0 | 10 | 5.3130 | 0.0356 | 5.3133 | 2.3051 |
| No log | 11.0 | 11 | 4.7972 | 0.0270 | 4.7976 | 2.1903 |
| No log | 12.0 | 12 | 3.6411 | 0.0078 | 3.6416 | 1.9083 |
| No log | 13.0 | 13 | 3.1161 | 0.0039 | 3.1165 | 1.7654 |
| No log | 14.0 | 14 | 2.8174 | 0.0 | 2.8179 | 1.6787 |
| No log | 15.0 | 15 | 2.4930 | 0.0250 | 2.4936 | 1.5791 |
| No log | 16.0 | 16 | 2.2413 | 0.1391 | 2.2418 | 1.4973 |
| No log | 17.0 | 17 | 2.1104 | 0.1419 | 2.1109 | 1.4529 |
| No log | 18.0 | 18 | 1.7575 | 0.0834 | 1.7580 | 1.3259 |
| No log | 19.0 | 19 | 1.5609 | 0.0539 | 1.5614 | 1.2496 |
| No log | 20.0 | 20 | 1.3621 | 0.0475 | 1.3626 | 1.1673 |
| No log | 21.0 | 21 | 1.3271 | 0.0475 | 1.3276 | 1.1522 |
| No log | 22.0 | 22 | 1.1275 | 0.0280 | 1.1280 | 1.0621 |
| No log | 23.0 | 23 | 0.9953 | 0.0213 | 0.9957 | 0.9978 |
| No log | 24.0 | 24 | 0.9249 | 0.0496 | 0.9253 | 0.9619 |
| No log | 25.0 | 25 | 0.8935 | 0.1400 | 0.8939 | 0.9455 |
| No log | 26.0 | 26 | 0.7872 | 0.3523 | 0.7875 | 0.8874 |
| No log | 27.0 | 27 | 0.7419 | 0.4004 | 0.7422 | 0.8615 |
| No log | 28.0 | 28 | 0.6817 | 0.4540 | 0.6819 | 0.8258 |
| No log | 29.0 | 29 | 0.6504 | 0.4513 | 0.6506 | 0.8066 |
| No log | 30.0 | 30 | 0.6847 | 0.4295 | 0.6850 | 0.8276 |
| No log | 31.0 | 31 | 0.6630 | 0.4597 | 0.6632 | 0.8144 |
| No log | 32.0 | 32 | 0.5746 | 0.5038 | 0.5748 | 0.7581 |
| No log | 33.0 | 33 | 0.5558 | 0.4419 | 0.5559 | 0.7456 |
| No log | 34.0 | 34 | 0.5586 | 0.4316 | 0.5586 | 0.7474 |
| No log | 35.0 | 35 | 0.5078 | 0.4813 | 0.5078 | 0.7126 |
| No log | 36.0 | 36 | 0.5303 | 0.5289 | 0.5305 | 0.7284 |
| No log | 37.0 | 37 | 0.5891 | 0.5436 | 0.5893 | 0.7677 |
| No log | 38.0 | 38 | 0.5657 | 0.5565 | 0.5659 | 0.7523 |
| No log | 39.0 | 39 | 0.4878 | 0.5757 | 0.4878 | 0.6985 |
| No log | 40.0 | 40 | 0.4796 | 0.5784 | 0.4797 | 0.6926 |
| No log | 41.0 | 41 | 0.5213 | 0.5571 | 0.5213 | 0.7220 |
| No log | 42.0 | 42 | 0.5218 | 0.5516 | 0.5219 | 0.7224 |
| No log | 43.0 | 43 | 0.5555 | 0.5569 | 0.5556 | 0.7454 |
| No log | 44.0 | 44 | 0.5844 | 0.5457 | 0.5844 | 0.7645 |
| No log | 45.0 | 45 | 0.5430 | 0.5570 | 0.5430 | 0.7369 |
| No log | 46.0 | 46 | 0.4960 | 0.5493 | 0.4959 | 0.7042 |
| No log | 47.0 | 47 | 0.5087 | 0.5671 | 0.5086 | 0.7132 |
| No log | 48.0 | 48 | 0.5712 | 0.5508 | 0.5710 | 0.7557 |
| No log | 49.0 | 49 | 0.6876 | 0.5099 | 0.6873 | 0.8291 |
| No log | 50.0 | 50 | 0.6958 | 0.4911 | 0.6954 | 0.8339 |
| No log | 51.0 | 51 | 0.6120 | 0.5178 | 0.6117 | 0.7821 |
| No log | 52.0 | 52 | 0.5819 | 0.5272 | 0.5817 | 0.7627 |
| No log | 53.0 | 53 | 0.6024 | 0.5095 | 0.6021 | 0.7760 |
| No log | 54.0 | 54 | 0.6154 | 0.5202 | 0.6150 | 0.7842 |
| No log | 55.0 | 55 | 0.6630 | 0.4937 | 0.6624 | 0.8139 |
| No log | 56.0 | 56 | 0.7037 | 0.5 | 0.7030 | 0.8385 |
| No log | 57.0 | 57 | 0.6769 | 0.5125 | 0.6762 | 0.8223 |
| No log | 58.0 | 58 | 0.6515 | 0.5145 | 0.6510 | 0.8068 |
| No log | 59.0 | 59 | 0.6504 | 0.5114 | 0.6499 | 0.8062 |
| No log | 60.0 | 60 | 0.6401 | 0.5417 | 0.6396 | 0.7998 |
| No log | 61.0 | 61 | 0.6278 | 0.5456 | 0.6273 | 0.7920 |
| No log | 62.0 | 62 | 0.6597 | 0.5381 | 0.6591 | 0.8119 |
| No log | 63.0 | 63 | 0.6777 | 0.5229 | 0.6771 | 0.8229 |
| No log | 64.0 | 64 | 0.6442 | 0.5269 | 0.6437 | 0.8023 |
| No log | 65.0 | 65 | 0.6036 | 0.5378 | 0.6032 | 0.7767 |
| No log | 66.0 | 66 | 0.6005 | 0.5609 | 0.6002 | 0.7747 |
| No log | 67.0 | 67 | 0.5991 | 0.5509 | 0.5987 | 0.7738 |
| No log | 68.0 | 68 | 0.6069 | 0.5508 | 0.6065 | 0.7788 |
| No log | 69.0 | 69 | 0.6139 | 0.5282 | 0.6135 | 0.7832 |
| No log | 70.0 | 70 | 0.6126 | 0.5339 | 0.6123 | 0.7825 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
kenonix/gemma-3-ko-4B-uc2-LoRA
|
kenonix
| 2025-04-26T10:41:43Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T10:41:33Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pansysalome/pansysalome
|
pansysalome
| 2025-04-26T10:35:32Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-26T10:35:31Z |
---
license: bigscience-openrail-m
---
|
RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf
|
RichardErkhov
| 2025-04-26T10:33:22Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T08:34:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-7B-Instruct-it-v1.1-v1.0 - GGUF
- Model creator: https://huggingface.co/homeb82784/
- Original model: https://huggingface.co/homeb82784/Qwen2-7B-Instruct-it-v1.1-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2-7B-Instruct-it-v1.1-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf/blob/main/Qwen2-7B-Instruct-it-v1.1-v1.0.Q8_0.gguf) | Q8_0 | 7.54GB |
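Any of these files can be fetched individually, for example with the Hugging Face CLI (an illustrative command using the Q4_K_M quant):
```bash
# Download a single quant file into the current directory.
huggingface-cli download RichardErkhov/homeb82784_-_Qwen2-7B-Instruct-it-v1.1-v1.0-gguf \
  Qwen2-7B-Instruct-it-v1.1-v1.0.Q4_K_M.gguf --local-dir .
```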
Original model description:
---
base_model: Qwen2-7B-Instruct-it-v1.1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pmkodi/Analysis-Fine-tune-DeepSeek-R1-Distill-Llama-8B-LORA
|
pmkodi
| 2025-04-26T10:30:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T10:30:43Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pmkodi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mdlbkp/gemma-2-9b-it-abliterated-Q4_0-GGUF
|
mdlbkp
| 2025-04-26T10:13:56Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:quantized:IlyaGusev/gemma-2-9b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T10:13:31Z |
---
base_model: IlyaGusev/gemma-2-9b-it-abliterated
language:
- en
license: gemma
tags:
- llama-cpp
- gguf-my-repo
---
# mdlbkp/gemma-2-9b-it-abliterated-Q4_0-GGUF
This model was converted to GGUF format from [`IlyaGusev/gemma-2-9b-it-abliterated`](https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mdlbkp/gemma-2-9b-it-abliterated-Q4_0-GGUF --hf-file gemma-2-9b-it-abliterated-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mdlbkp/gemma-2-9b-it-abliterated-Q4_0-GGUF --hf-file gemma-2-9b-it-abliterated-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mdlbkp/gemma-2-9b-it-abliterated-Q4_0-GGUF --hf-file gemma-2-9b-it-abliterated-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mdlbkp/gemma-2-9b-it-abliterated-Q4_0-GGUF --hf-file gemma-2-9b-it-abliterated-q4_0.gguf -c 2048
```
|
Subh775/Llama-3.1-8b-Hinglish-General-sft
|
Subh775
| 2025-04-26T09:53:27Z | 8 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"unsloth",
"LoRA",
"trl",
"hinglish",
"text-generation-inference",
"text-generation",
"en",
"dataset:fhai50032/Hinglish-CoT-General",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-25T06:20:48Z |
---
license: apache-2.0
tags:
- unsloth
- LoRA
- trl
- hinglish
- text-generation-inference
datasets:
- fhai50032/Hinglish-CoT-General
language:
- en
base_model:
- unsloth/Meta-Llama-3.1-8B
pipeline_tag: text-generation
library_name: adapter-transformers
---
# 🧠 Llama-3.1-8B-Hinglish-General-sft
**Llama-3.1-8b-Hinglish-General-sft** is a lightweight, domain-specific fine-tuned model built for **conversational Hinglish-style reasoning**, with a focus on general Hinglish knowledge. It builds upon `Meta-Llama-3.1-8B` and uses **LoRA adapters** for efficient fine-tuning with **Unsloth**.
> ⚠️ This model is a demonstration of supervised fine-tuning and is intended solely for educational and informational purposes. It is not validated for critical applications and should not be used for real-life decision-making.
---
## 📋 Model Summary
- **Base Model:** [`unsloth/Meta-Llama-3.1-8B`](https://huggingface.co/unsloth/Meta-Llama-3.1-8B)
- **LoRA Adapter:** `Subh775/Llama-3.1-8b-Hinglish-General-sft`
- **Fine-tuned Dataset:** [`fhai50032/Hinglish-CoT-General`](https://huggingface.co/datasets/fhai50032/Hinglish-CoT-General)
- **Language:** Hinglish (Hindi-English mix)
- **Training Time:** 49.24 minutes (1 epoch)
- **Framework:** [Unsloth](https://github.com/unslothai/unsloth)
- **Quantization:** 4-bit (for efficient inference)
---
## 💡 Key Features
- 🗣️ **Hinglish-CoT Reasoning:** Trained on ~2K question-answer pairs with step-by-step reasoning in Hinglish.
- ⚙️ **Efficient Inference:** Enabled by LoRA + Unsloth + 4-bit quantization.
- 🚀 **Fast and Lightweight:** Optimized for quick inference even on limited hardware.
---
## 🛠️ Inference Instructions
### 🔧 Installation
```bash
pip install unsloth
```
```python
from unsloth import FastLanguageModel
import torch
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{question}
### Input:
{thoughts}
### Response:
{answer}"""
# Load model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Subh775/Llama-3.1-8b-Hinglish-General-sft",
    max_seq_length=2048,
    load_in_4bit=True
)
FastLanguageModel.for_inference(model)
```
```python
import re
def clean_response(text):
    # Strip the prompt scaffolding and keep only the generated answer text.
    if "### Response:" in text:
        text = text.split("### Response:")[-1]
    lines = text.strip().splitlines()
    clean_lines = [line.strip() for line in lines if not re.match(r"^(#|input:|response:|Input:|Response:)", line, re.IGNORECASE)]
    return " ".join(clean_lines).strip()

def chat():
    print("🩺 Chat with Llama-3.1-8b-Hinglish-General-sft! Type '\\q' or 'quit' to stop.\n")
    chat_history = ""
    while True:
        user_input = input("➤ ")
        if user_input.lower() in ['\\q', 'quit']:
            print("\nExiting the chat. Goodbye 🧠✨!")
            print("✨" + "=" * 30 + "✨\n")
            break
        question = user_input
        thoughts = "User is asking a genuine question. Thinking step-by-step in Hinglish."
        prompt = alpaca_prompt.format(question=question, thoughts=thoughts, answer="")
        chat_history += prompt + "\n"
        inputs = tokenizer([chat_history], return_tensors="pt").to("cuda")
        outputs = model.generate(
            **inputs,
            max_new_tokens=256,
            temperature=0.7,
            top_p=0.9,
            num_return_sequences=1,
            do_sample=True,
            no_repeat_ngram_size=2
        )
        decoded_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
        clean_output = clean_response(decoded_output)
        chat_history += f"{clean_output}\n"
        print(f"\n❄️: {clean_output}\n")

chat()
```
## 📈 Training details
- Dataset Used: Hinglish-CoT-General
- Total Samples: 2,015 examples
- Training Time: ~49 minutes (1 epoch; see the LoRA sketch after this list)
- Final Step: 60
- Final Training Loss: 0.776
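A minimal Unsloth LoRA sketch of how a run like this is typically set up (the rank, alpha, and target modules below are illustrative assumptions, not this card's published settings):
```python
from unsloth import FastLanguageModel

# Illustrative hyperparameters; the exact r/alpha/target_modules of this run are not published.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```
The resulting PEFT model is then typically passed to a TRL `SFTTrainer` together with the formatted dataset.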
## ⚠️ Limitations
- 🧠 Generalized understanding – may not reflect recent advancements
- The fine-tuning dataset is small (~2K examples), so model responses may not always be accurate.
## 📜 License
This model is licensed under the Apache 2.0 License, same as its base model.
## 📚 Citation
```bibtex
@misc{llama3_8b_hinglish_general_2025,
author = {Subh775},
title = {Llama-3.1 8B Hinglish General SFT},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Subh775/Llama-3.1-8b-Hinglish-General-sft}},
note = {Hugging Face Repository}
}
```
|
jeffreynicolette/jeffreynicolette
|
jeffreynicolette
| 2025-04-26T09:05:43Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-26T09:05:43Z |
---
license: bigscience-openrail-m
---
|
Flo0620/Qwen2_5_7B_r4_a8_d0_2
|
Flo0620
| 2025-04-26T09:02:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T05:49:48Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r4_a8_d0_2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r4_a8_d0_2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r4_a8_d0_2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nyrishh/my-sentiment-model
|
nyrishh
| 2025-04-26T08:18:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T06:59:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
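In the meantime, here is a minimal sketch, assuming the checkpoint works with the standard `text-classification` pipeline suggested by this repo's tags:
```python
from transformers import pipeline

# Assumption: a standard DistilBERT sequence-classification head, per this repo's tags.
classifier = pipeline("text-classification", model="nyrishh/my-sentiment-model")
print(classifier("I really enjoyed this movie!"))
```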
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aleegis/3f7d34df-a516-4fdc-8303-2e4d55fad5e1
|
aleegis
| 2025-04-26T08:14:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | 2025-04-26T06:44:07Z |
---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3f7d34df-a516-4fdc-8303-2e4d55fad5e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- a8730480951cb332_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a8730480951cb332_train_data.json
type:
field_instruction: prompt
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/3f7d34df-a516-4fdc-8303-2e4d55fad5e1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/a8730480951cb332_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 6ec2fd27-d2b8-427a-b985-e17ebac9da00
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6ec2fd27-d2b8-427a-b985-e17ebac9da00
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 3f7d34df-a516-4fdc-8303-2e4d55fad5e1
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on an unspecified dataset.
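Since this repository holds a PEFT LoRA adapter, loading it typically follows the standard PEFT pattern (a sketch, not an official snippet from the author):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply this repo's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "aleegis/3f7d34df-a516-4fdc-8303-2e4d55fad5e1")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")
```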
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
madelinenicole/madelinenicole
|
madelinenicole
| 2025-04-26T08:11:57Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-04-26T08:11:57Z |
---
license: bigscience-openrail-m
---
|
CCF2P/Exam
|
CCF2P
| 2025-04-26T08:10:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T06:36:12Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased-finetuned-sst-2-english
tags:
- generated_from_trainer
model-index:
- name: Exam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Exam
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored as `TrainingArguments` in the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
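As a rough illustration, these settings map onto `TrainingArguments` like so (a sketch; `output_dir` is hypothetical and not part of this card):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="exam-checkpoints",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```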
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 0.0001 |
| No log | 2.0 | 4 | 0.0002 |
| No log | 3.0 | 6 | 0.0002 |
| No log | 4.0 | 8 | 0.0002 |
| No log | 5.0 | 10 | 0.0002 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
|
chengyongyeo/ppo-LunarLander-v2
|
chengyongyeo
| 2025-04-26T08:10:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-26T08:09:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: LunarLander-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.82 +/- 15.86
name: mean_reward
verified: false
---
# **LunarLander-v2** Agent playing **LunarLander-v2**
This is a trained model of a **LunarLander-v2** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The snippet below is a minimal sketch for loading the checkpoint; the filename passed to `load_from_hub` is an assumption based on this card's naming convention.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; check the repo's file list if loading fails.
checkpoint = load_from_hub("chengyongyeo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
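To reproduce a score like the one reported above, evaluation can follow this sketch (assumes `gymnasium[box2d]` is installed; on newer gymnasium releases the env id is `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```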
|
tangledgroup/tangled-alpha-0.14-core
|
tangledgroup
| 2025-04-26T08:09:44Z | 0 | 0 |
transformers
|
[
"transformers",
"chat",
"core",
"base",
"instruct",
"reason",
"text-generation",
"en",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"eo",
"es",
"et",
"eu",
"fa",
"ff",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gn",
"gu",
"ha",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lg",
"li",
"ln",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"ns",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"qu",
"rm",
"ro",
"ru",
"sa",
"si",
"sc",
"sd",
"sk",
"sl",
"so",
"sq",
"sr",
"ss",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tn",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yi",
"yo",
"zu",
"dataset:ontocord/fineweb-permissive-multilingual-2m",
"dataset:distily/c4_multilingual_1M",
"dataset:data-silence/sumnews",
"dataset:xu-song/cc100-samples",
"dataset:badrex/llm-emoji-dataset",
"dataset:fblgit/simple-math",
"dataset:Gusarich/math-expressions-1m",
"dataset:neuralwork/arxiver",
"dataset:christopher/rosetta-code",
"dataset:nampdn-ai/tiny-codes",
"dataset:JeanKaddour/minipile",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:simplescaling/s1K-1.1",
"dataset:mlabonne/open-perfectblend",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:rombodawg/Everything_Instruct_Multilingual",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:cognitivecomputations/dolphin-r1",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-18T07:53:24Z |
---
license: mit
pipeline_tag: text-generation
library_name: transformers
language: [
'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets:
# core - base
- ontocord/fineweb-permissive-multilingual-2m
- distily/c4_multilingual_1M
- data-silence/sumnews
- xu-song/cc100-samples
- badrex/llm-emoji-dataset
- fblgit/simple-math
- Gusarich/math-expressions-1m
- neuralwork/arxiver
- christopher/rosetta-code
- nampdn-ai/tiny-codes
- JeanKaddour/minipile
# core - instruct
- NousResearch/hermes-function-calling-v1
- simplescaling/s1K-1.1
# base - instruct
- mlabonne/open-perfectblend
- allenai/tulu-3-sft-mixture
- rombodawg/Everything_Instruct_Multilingual
# base - reason
- open-r1/OpenR1-Math-220k
- open-thoughts/OpenThoughts-114k
- cognitivecomputations/dolphin-r1
- simplescaling/s1K-1.1
tags:
- chat
- core
- base
- instruct
- reason
---
# tangled-alpha-0.14-core

```bash
time python -B prepare_base_datasets.py
```
```
i=0, min_len=0, max_len=1073741824, block_size=8193, chunk_size=16386000, len(dataset)=1496631, len(dataset) * block_size=12261897783
Total number of tokens in the optimized dataset '../base-data-0-0-1073741824-8193-2000' is 12261897783
i=1, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=78802, len(dataset) * block_size=1291170770
Total number of tokens in the optimized dataset '../base-data-1-8193-16385-16385-1000' is 1291170770
i=2, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=23511, len(dataset) * block_size=770431959
Total number of tokens in the optimized dataset '../base-data-2-16385-32769-32769-500' is 770431959
i=3, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=5128, len(dataset) * block_size=336073736
Total number of tokens in the optimized dataset '../base-data-3-32769-65537-65537-250' is 336073736
i=4, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=1169, len(dataset) * block_size=153224337
Total number of tokens in the optimized dataset '../base-data-4-65537-131073-131073-125' is 153224337
46G ../base-data-0-0-1073741824-8193-2000
4.9G ../base-data-1-8193-16385-16385-1000
2.9G ../base-data-2-16385-32769-32769-500
1.3G ../base-data-3-32769-65537-65537-250
589M ../base-data-4-65537-131073-131073-125
```
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_base_model_0.yaml
```
```
```
Backup `wandb`:
```bash
mv wandb wandb-pretrain-base-0
```
Copy config:
```bash
cp ../config-0.json ../out/pretrain-base-0/final/config.json
```
Chat with model:
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-base-0/final
```
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-base-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-base-0/final'
```
```
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------|
|leaderboard | N/A| | | | | | | |
| - leaderboard_bbh | N/A| | | | | | | |
| - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.4560|± |0.0316|
| - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5187|± |0.0366|
| - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253|
| - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3400|± |0.0300|
| - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.4680|± |0.0316|
| - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.0880|± |0.0180|
| - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317|
| - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.1880|± |0.0248|
| - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1440|± |0.0222|
| - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3360|± |0.0299|
| - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2680|± |0.0281|
| - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.5800|± |0.0313|
| - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0560|± |0.0146|
| - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2055|± |0.0336|
| - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1400|± |0.0220|
| - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2160|± |0.0261|
| - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.1120|± |0.0200|
| - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.5056|± |0.0376|
| - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4800|± |0.0317|
| - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2840|± |0.0286|
| - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.2400|± |0.0271|
| - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1520|± |0.0228|
| - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3320|± |0.0298|
| - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317|
| - leaderboard_gpqa | N/A| | | | | | | |
| - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2071|± |0.0289|
| - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2637|± |0.0189|
| - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2612|± |0.0208|
| - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2590|± | N/A|
| | |none | 0|inst_level_strict_acc |↑ |0.2494|± | N/A|
| | |none | 0|prompt_level_loose_acc |↑ |0.1497|± |0.0154|
| | |none | 0|prompt_level_strict_acc|↑ |0.1405|± |0.0150|
| - leaderboard_math_hard | N/A| | | | | | | |
| - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.0008|± |0.0008|
| - leaderboard_math_counting_and_prob_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_geometry_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0|
| - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1112|± |0.0029|
| - leaderboard_musr | N/A| | | | | | | |
| - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5240|± |0.0316|
| - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2578|± |0.0274|
| - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3960|± |0.0310|
```
```bash
litgpt convert_pretrained_checkpoint ../out/pretrain-base-0/final ../out/pretrain-base-0/checkpoint
```
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_base_model_1.yaml
```
```bash
litgpt convert_pretrained_checkpoint ../out/pretrain-base-1/final ../out/pretrain-base-1/checkpoint
```
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_base_model_2.yaml
```
```bash
litgpt convert_pretrained_checkpoint ../out/pretrain-base-2/final ../out/pretrain-base-2/checkpoint
```
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_base_model_3.yaml
```
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-base-3/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-base-3/final'
```
```
```
|
LordOfSilence/SentimentExam
|
LordOfSilence
| 2025-04-26T07:43:47Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T07:43:10Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: trainer_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7147
- Model Preparation Time: 0.0035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|
| No log | 1.0 | 3 | 0.7089 | 0.0035 |
| No log | 2.0 | 6 | 0.7145 | 0.0035 |
| No log | 3.0 | 9 | 0.7153 | 0.0035 |
| No log | 4.0 | 12 | 0.7136 | 0.0035 |
| No log | 5.0 | 15 | 0.7147 | 0.0035 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
EQuIP-Queries/EQuIP_3B
|
EQuIP-Queries
| 2025-04-26T07:39:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-25T12:21:44Z |
---
library_name: transformers
license: mit
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- en
---
# Model Card for EQuIP-Queries/EQuIP_3B
An AI model that understands natural language and translates it into accurate Elasticsearch queries.
This model is based on the Qwen2.5 3B architecture, a compact yet powerful language model known for its efficiency.
We fine-tuned this model with 10,000 Elasticsearch query data points to specialize its ability to generate accurate and relevant queries.
## Model Details
### Model Description
**Our Solution: An AI-Powered Query Generator**
Our team developed this model to remove the need to hand-write Elasticsearch queries: it is based on the Qwen2.5 3B architecture, a compact yet powerful language model known for its efficiency, and was fine-tuned on 10,000 Elasticsearch query data points to specialize its ability to generate accurate and relevant queries.
We've employed advanced techniques, including LoRA (Low-Rank Adaptation), to optimize the model for performance and efficiency. Specifically, LoRA reduces the number of trainable parameters by introducing low-rank matrices.
This combination allows us to achieve high accuracy while minimizing computational resource requirements.
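For context, a LoRA setup of this kind is typically expressed with PEFT along these lines (the rank and alpha below are illustrative; the authors' exact values are not published on this card):
```python
from peft import LoraConfig

# Illustrative hyperparameters, not the authors' exact configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```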
**Key Features and Benefits**
- **Natural Language Interface:** Users can simply describe the data they're looking for in plain English, and the model will generate the corresponding Elasticsearch query.
- **Increased Efficiency:** Reduces the time and effort required to write complex queries, allowing users to focus on analyzing their data.
- **Improved Accessibility:** Makes Elasticsearch more accessible to a wider audience, including those who are not experts in its query language.
- **Open Source:** We are committed to open source and believe in the power of community-driven innovation. By making our model open source, we aim to contribute to the advancement of AI and empower others to build upon our work. We recognize the lack of readily available solutions in this specific area, and we're excited to fill that gap.
- **Future Developments:** This is just the beginning. Our team is dedicated to pushing the boundaries of what's possible with AI, and we have plans to release further updates and enhancements to this model in the future. We are committed to continuous improvement and innovation in the field of AI-powered search.
- **Developed by:** EQuIP
- **Funded by:** EQuIP
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English (en)
- **License:** MIT License
- **Finetuned from model:** Qwen2.5-3B-Instruct
### Model Sources [optional]
- **Repository:** https://huggingface.co/EQuIP-Queries/EQuIP_3B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This model is intended to be directly used to translate natural language prompts into Elasticsearch queries without additional fine-tuning.
Example use cases include:
- Generating Elasticsearch queries from plain English prompts.
- Simplifying query generation for analysts, developers, or data scientists unfamiliar with Elasticsearch syntax.
- Automating query creation as part of search, analytics, or data exploration tools.

Intended users:
- Developers integrating natural language querying capabilities into Elasticsearch-based applications.
- Analysts and data scientists who frequently interact with Elasticsearch data.
### Out-of-Scope Use
The model is not intended for use cases such as:
- Generating queries for databases or search engines other than Elasticsearch.
- Handling languages other than English.
- Providing factual answers or general conversational interactions.
- Tasks involving sensitive decision-making, such as medical, legal, or financial advice, where inaccurate queries may lead to significant consequences.
## Bias, Risks, and Limitations
Bias Awareness:
- The model may inherit biases present in the training data. Users should assess generated outputs for unintended biases or patterns, particularly in sensitive contexts.
Misuse and Malicious Use:
- Users must avoid using the model to intentionally produce harmful or misleading search queries or manipulate search results negatively.
Limitations:
- Performance may degrade significantly if input prompts differ substantially from the fine-tuning data domain.
- The model does not validate query accuracy or safety and should be reviewed before execution, especially in production environments.
### Recommendations
Query Validation:
- Always validate and test generated Elasticsearch queries before deploying in production or using on sensitive data. Automatic generation may occasionally result in syntactic or semantic inaccuracies.
Bias Awareness:
- The model may inherit biases present in the training data. Users should assess generated outputs for unintended biases or patterns, particularly in sensitive contexts.
Use in Sensitive Contexts:
- Avoid using this model for critical or high-stakes decision-making tasks without additional human oversight and validation.
Continuous Monitoring:
- Monitor the outputs regularly to identify and correct issues promptly, ensuring long-term reliability.
Transparency:
- Clearly communicate the AI-driven nature of generated Elasticsearch queries to end-users to manage expectations and encourage verification.
## How to Get Started with the Model
Install the required dependencies:
```bash
pip install transformers torch
```
Here's how you can quickly start generating Elasticsearch queries from natural language prompts using this model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EQuIP-Queries/EQuIP_3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

mapping = "[Your Elasticsearch mappings]"
user_request = "Find me products which are less than $50"
prompt = f"Given this mapping: {mapping}\nGenerate an Elasticsearch query for: {user_request}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_length=512,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
    pad_token_id=tokenizer.pad_token_id
)
generated_query = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print("Generated Elasticsearch query:")
print(generated_query)
```
## Training Details
### Training Data
The model was fine-tuned on a custom dataset consisting of 10,000 pairs of natural language prompts and corresponding Elasticsearch queries. Each prompt describes the desired Elasticsearch query in plain English, paired with a manually crafted accurate Elasticsearch query.
The dataset covers various query types and common Elasticsearch query patterns, including filters, range queries, aggregations, boolean conditions, and text search scenarios.
Currently, the dataset is not publicly available. If made available in the future, a Dataset Card link will be provided here.
Preprocessing:
- Prompts and queries were cleaned to ensure consistent formatting.
- Special tokens and unnecessary whitespace were removed to ensure high-quality training data.
### Training Procedure
The model was fine-tuned using Low-Rank Adaptation (LoRA) on top of the pre-trained Qwen2.5-3B-Instruct model. LoRA significantly reduced computational requirements by training only low-rank matrices within the Transformer layers.
#### Training Hyperparameters
- **Training regime:** bf16 non-mixed precision
## Evaluation
The model was evaluated using a held-out test set comprising 1,000 prompt-query pairs not included in the training dataset. The primary goal of the evaluation was to measure the accuracy and relevance of generated Elasticsearch queries.
### Testing Data, Factors & Metrics
#### Testing Data
- Size: 1,000 prompt-query pairs (held-out from training).
- Composition: Representative of diverse Elasticsearch query types, including boolean conditions, aggregations, text search, and date-based queries.
#### Factors
The evaluation considered:
- Complexity of the Elasticsearch query.
- Accuracy in interpreting the intent of natural language prompts.
- Syntactic correctness and relevance of generated queries.
#### Metrics
Exact Match: Measures the percentage of queries matching exactly with ground truth queries.
Semantic Similarity: Assessed using embedding-based similarity scores (e.g., cosine similarity).
Token-level F1: Evaluates precision and recall at the token-level, measuring partial correctness in generated queries.
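One common way to compute a token-level F1 of this kind (a sketch; the card does not specify the exact tokenization used):
```python
from collections import Counter

def token_f1(pred_tokens, gold_tokens):
    # Multiset overlap between predicted and reference query tokens (SQuAD-style F1).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```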
### Results
| Model | Parameters | Generation Time (sec) | Token Precision | Token Recall | Token F1 | Validity Rate | Field Similarity |
|--------------------|------------|-----------------------|-----------------|--------------|----------|---------------|------------------|
| **EQuIP** | 3B | 0.7969 | 0.8738 | 0.9737 | 0.9808 | 0.97 | 0.9916 |
| **LLaMA 3.1** | 8B | 13.4822 | 0.3979 | 0.6 | 0.5693 | 0.5723 | 0.4622 |
| **Qwen 2.5** | 7B | 1.4233 | 0.6667 | 0.7 | 0.7743 | 0.82 | 0.6479 |
| **Deepseek Distill** | 8B | 9.2516 | 0.5846 | 0.65 | 0.6979 | 0.7496 | 0.8908 |
| **Gemma 2** | 9B | 3.0801 | 0.6786 | 0.82 | 0.7309 | 0.8 | 0.8151 |
| **Mistral** | 7B | 2.1068 | 0.6786 | 0.79 | 0.7551 | 0.8 | 0.7437 |
#### Summary
The evaluation demonstrates that the model achieves strong performance in accurately translating natural language prompts into valid Elasticsearch queries. It shows particularly high effectiveness in terms of token precision, recall, and overall semantic similarity, highlighting its ability to generate accurate, relevant, and syntactically correct queries efficiently. Compared to several other widely-used models, it offers an excellent balance of accuracy, speed, and computational efficiency, making it highly suitable for production use in Elasticsearch query generation tasks. However, it's recommended that users continue to verify query outputs, especially for critical or sensitive applications.
## Environmental Impact
Carbon emissions for the training and fine-tuning of this model can be estimated using the Machine Learning Impact calculator introduced by Lacoste et al. (2019).
- **Hardware Type:** NVIDIA A100 GPU
- **Hours used:** 11 hours
- **Cloud Provider:** Vast.ai
### Model Architecture and Objective
This model is based on the Qwen2.5-3B-Instruct architecture, which is a decoder-only, transformer-based causal language model. It consists of approximately 3 billion parameters designed for efficient and high-quality natural language understanding and generation.
The primary objective of this fine-tuned model is to accurately convert natural language prompts into syntactically correct and semantically relevant Elasticsearch queries. To achieve this, the model was fine-tuned on domain-specific data, incorporating Low-Rank Adaptation (LoRA) to optimize performance and resource efficiency.
## Model Card Contact
Contact: EQuIP
Email: [[email protected]]
|
Snoutpunk/Bunglebot-GLM-4-32B-0414-merged_4bit
|
Snoutpunk
| 2025-04-26T07:37:05Z | 0 | 0 |
transformers
|
[
"transformers",
"glm4",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:THUDM/GLM-Z1-9B-0414",
"base_model:finetune:THUDM/GLM-Z1-9B-0414",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-04-26T07:31:29Z |
---
base_model: THUDM/GLM-Z1-9B-0414
tags:
- text-generation-inference
- transformers
- unsloth
- glm4
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Snoutpunk
- **License:** apache-2.0
- **Finetuned from model:** THUDM/GLM-Z1-9B-0414
This glm4 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MrRobotoAI/B2
|
MrRobotoAI
| 2025-04-26T07:14:08Z | 354 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K",
"base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K",
"base_model:hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora",
"base_model:merge:hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora",
"base_model:marsfu2009/writer_lora",
"base_model:merge:marsfu2009/writer_lora",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T15:34:17Z |
---
base_model:
- MrRobotoAI/233
- MrRobotoAI/222
- MrRobotoAI/227
- MrRobotoAI/236
- MrRobotoAI/235
- marsfu2009/writer_lora
- MrRobotoAI/Odin-v2-8b-NOVELIST-128K
- MrRobotoAI/229
- hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
library_name: transformers
tags:
- mergekit
- merge
---
SPECIAL +
# merge 13,756 13,293 13,347 8,369 13,352 13,493 13,345
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
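Conceptually, the linear method computes a weighted average of the parameter tensors of the listed models; a simplified sketch is below (mergekit's actual implementation additionally handles dtypes, tokenizer alignment, and LoRA application):
```python
import torch

def linear_merge(state_dicts, weights):
    # Weighted average of each parameter across models (model-soup style).
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts)) / total
    return merged
```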
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/233](https://huggingface.co/MrRobotoAI/233)
* [MrRobotoAI/222](https://huggingface.co/MrRobotoAI/222)
* [MrRobotoAI/227](https://huggingface.co/MrRobotoAI/227)
* [MrRobotoAI/236](https://huggingface.co/MrRobotoAI/236)
* [MrRobotoAI/235](https://huggingface.co/MrRobotoAI/235) + [marsfu2009/writer_lora](https://huggingface.co/marsfu2009/writer_lora)
* [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K)
* [MrRobotoAI/229](https://huggingface.co/MrRobotoAI/229) + [hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora](https://huggingface.co/hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/222
- model: MrRobotoAI/227
- model: MrRobotoAI/229+hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- model: MrRobotoAI/233
- model: MrRobotoAI/235+marsfu2009/writer_lora
- model: MrRobotoAI/236
- model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
|
genki10/BERT_V8_sp10_lw40_ex10_lo00_k10_k10_fold0
|
genki10
| 2025-04-26T07:06:43Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-26T06:46:47Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex10_lo00_k10_k10_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex10_lo00_k10_k10_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8908
- Qwk: 0.2777
- Mse: 0.8908
- Rmse: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 6 | 7.3395 | 0.0 | 7.3395 | 2.7092 |
| No log | 2.0 | 12 | 4.0407 | 0.0115 | 4.0407 | 2.0102 |
| No log | 3.0 | 18 | 1.8245 | 0.0474 | 1.8245 | 1.3507 |
| No log | 4.0 | 24 | 1.0443 | 0.0324 | 1.0443 | 1.0219 |
| No log | 5.0 | 30 | 0.8864 | 0.1627 | 0.8864 | 0.9415 |
| No log | 6.0 | 36 | 1.4258 | 0.0687 | 1.4258 | 1.1940 |
| No log | 7.0 | 42 | 0.7230 | 0.4177 | 0.7230 | 0.8503 |
| No log | 8.0 | 48 | 0.7274 | 0.2679 | 0.7274 | 0.8529 |
| No log | 9.0 | 54 | 0.7349 | 0.2827 | 0.7349 | 0.8572 |
| No log | 10.0 | 60 | 0.6784 | 0.3837 | 0.6784 | 0.8236 |
| No log | 11.0 | 66 | 0.6585 | 0.4228 | 0.6585 | 0.8115 |
| No log | 12.0 | 72 | 0.6484 | 0.3744 | 0.6484 | 0.8052 |
| No log | 13.0 | 78 | 0.8274 | 0.3565 | 0.8274 | 0.9096 |
| No log | 14.0 | 84 | 0.7280 | 0.3556 | 0.7280 | 0.8532 |
| No log | 15.0 | 90 | 0.6954 | 0.3725 | 0.6954 | 0.8339 |
| No log | 16.0 | 96 | 0.7473 | 0.4005 | 0.7473 | 0.8645 |
| No log | 17.0 | 102 | 0.6286 | 0.4050 | 0.6286 | 0.7929 |
| No log | 18.0 | 108 | 0.7118 | 0.3490 | 0.7118 | 0.8437 |
| No log | 19.0 | 114 | 0.9043 | 0.2975 | 0.9043 | 0.9510 |
| No log | 20.0 | 120 | 0.8476 | 0.3411 | 0.8476 | 0.9207 |
| No log | 21.0 | 126 | 1.0441 | 0.2277 | 1.0441 | 1.0218 |
| No log | 22.0 | 132 | 1.0646 | 0.2256 | 1.0646 | 1.0318 |
| No log | 23.0 | 138 | 0.9045 | 0.2742 | 0.9045 | 0.9510 |
| No log | 24.0 | 144 | 0.9708 | 0.2720 | 0.9708 | 0.9853 |
| No log | 25.0 | 150 | 0.7800 | 0.3087 | 0.7800 | 0.8832 |
| No log | 26.0 | 156 | 0.8729 | 0.2836 | 0.8729 | 0.9343 |
| No log | 27.0 | 162 | 1.0184 | 0.2339 | 1.0184 | 1.0092 |
| No log | 28.0 | 168 | 0.6940 | 0.3754 | 0.6940 | 0.8331 |
| No log | 29.0 | 174 | 0.7443 | 0.3675 | 0.7443 | 0.8627 |
| No log | 30.0 | 180 | 1.0411 | 0.2276 | 1.0411 | 1.0203 |
| No log | 31.0 | 186 | 0.9368 | 0.2640 | 0.9368 | 0.9679 |
| No log | 32.0 | 192 | 0.7206 | 0.3326 | 0.7206 | 0.8489 |
| No log | 33.0 | 198 | 0.7463 | 0.3269 | 0.7463 | 0.8639 |
| No log | 34.0 | 204 | 0.9339 | 0.2560 | 0.9339 | 0.9664 |
| No log | 35.0 | 210 | 0.9010 | 0.2327 | 0.9010 | 0.9492 |
| No log | 36.0 | 216 | 0.8464 | 0.3248 | 0.8464 | 0.9200 |
| No log | 37.0 | 222 | 0.9427 | 0.2666 | 0.9427 | 0.9709 |
| No log | 38.0 | 228 | 0.8935 | 0.2458 | 0.8935 | 0.9453 |
| No log | 39.0 | 234 | 0.7222 | 0.2960 | 0.7222 | 0.8498 |
| No log | 40.0 | 240 | 0.8034 | 0.2373 | 0.8034 | 0.8963 |
| No log | 41.0 | 246 | 0.8908 | 0.2777 | 0.8908 | 0.9438 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
10-Redeem-Craze-Viral-Video-Link/Redeem.Craze.Viral.Video.Leaks.Tutorial
|
10-Redeem-Craze-Viral-Video-Link
| 2025-04-26T06:58:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T06:57:01Z |
Christian Artist Forrest Frank Hits TikTok’s Top 50 Thanks to Dance Craze
Christian artist Forrest Frank's song Your Way’s Better has gone viral on TikTok, thanks to a catchy, faith-filled message and a popular dance...
A feel-good song by one of the top artists in Christian music is trending on TikTok -- and even has its own TikTok dance. Forrest Frank's Your Way's Better gained viral status on TikTok in recent weeks, climbing into the platform's much-watched Top 50 chart thanks largely to an easy-to-learn dance popular among teens.
|
Natures1402/NourixKapslarSverige
|
Natures1402
| 2025-04-26T06:33:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T06:28:48Z |
# Nourix Kapslar Sverige: Experiences, Official Website, Price, Order Now
Nourix Kapslar Sverige experiences: Nourix Diet works by boosting energy levels, strengthening your resolve, and offering a whole new lease on life. Its ingredients are selected to support healthy blood sugar levels, mental health, blood pressure, and digestive health, to increase strength, and to help condition the cardiovascular system.
##**[Click here to order from the official Nourix Kapslar website](https://nourixkapslar.com.se/)**
## Benefits of Nourix Kapslar
Natural Weight Loss: By combining effective ingredients, Nourix supports natural, sustainable weight loss.
Increased Energy: Ingredients such as L-Carnitine and Green Tea Extract can contribute to increased energy and improved physical performance.
Improved Metabolism: The active compounds in Nourix are believed to stimulate the metabolism, which can lead to more efficient fat burning.
Appetite Control: Garcinia Cambogia and Green Coffee Bean Extract can help control appetite and reduce overeating.
Antioxidant Support: Green Tea Extract provides powerful antioxidants that can protect the body against free radicals and support overall health.
## Use and Dosage
For best results, it is recommended to take one Nourix capsule daily, preferably before a meal. It is important to follow the dosage instructions on the packaging or the recommendation of a health professional.
## Safety and Side Effects
Nourix Kapslar are made with natural ingredients and are generally considered safe for most individuals. However, people with underlying health conditions, or those who are pregnant or breastfeeding, should consult a doctor before use. Side effects are rare but may include mild stomach discomfort.
##**[Click here to order from the official Nourix Kapslar website](https://nourixkapslar.com.se/)**
|
r1ck/gemma-3-4b-it-r1
|
r1ck
| 2025-04-26T06:24:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"image-text-to-text",
"conversational",
"vi",
"en",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-26T06:17:30Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: output
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: apache-2.0
language:
- vi
- en
pipeline_tag: image-text-to-text
---
# Introduction
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). The fine-tuning task is Vietnamese Q&A reasoning.
## Quick start
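A minimal sketch, assuming the fine-tune loads the same way as its base model `google/gemma-3-4b-it` (the example prompt is illustrative):
```python
from transformers import pipeline

# Assumes the checkpoint loads like google/gemma-3-4b-it.
pipe = pipeline("image-text-to-text", model="r1ck/gemma-3-4b-it-r1", device_map="auto")
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Vì sao bầu trời có màu xanh?"}]},
]
output = pipe(text=messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])
```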
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ianbroski23/albularyo
|
ianbroski23
| 2025-04-26T06:06:45Z | 0 | 0 | null |
[
"tl",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:finetune:deepseek-ai/DeepSeek-V3-0324",
"license:apache-2.0",
"region:us"
] | null | 2025-04-26T06:04:13Z |
---
license: apache-2.0
language:
- tl
base_model:
- deepseek-ai/DeepSeek-V3-0324
---
|
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_15-17_21-23_27-29_S42
|
annasoli
| 2025-04-26T06:01:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T06:01:04Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
onnx-community/opus-mt-fr-en
|
onnx-community
| 2025-04-26T06:00:00Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-fr-en",
"base_model:quantized:Helsinki-NLP/opus-mt-fr-en",
"license:cc-by-4.0",
"region:us"
] |
translation
| 2024-08-27T19:07:46Z |
---
base_model: Helsinki-NLP/opus-mt-fr-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-fr-en with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
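For reference, a conversion along these lines can be done with 🤗 Optimum (a sketch; the save paths are illustrative):
```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

model_id = "Helsinki-NLP/opus-mt-fr-en"
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)  # export to ONNX on the fly
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.save_pretrained("opus-mt-fr-en-onnx/onnx")  # illustrative path with an `onnx` subfolder
tokenizer.save_pretrained("opus-mt-fr-en-onnx")
```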
|
onnx-community/opus-mt-mul-en
|
onnx-community
| 2025-04-26T05:56:33Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-mul-en",
"base_model:quantized:Helsinki-NLP/opus-mt-mul-en",
"license:cc-by-4.0",
"region:us"
] |
translation
| 2024-08-27T19:03:35Z |
---
base_model: Helsinki-NLP/opus-mt-mul-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-mul-en with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
onnx-community/opus-mt-tc-big-tr-en
|
onnx-community
| 2025-04-26T05:52:34Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-tc-big-tr-en",
"base_model:quantized:Helsinki-NLP/opus-mt-tc-big-tr-en",
"license:cc-by-4.0",
"region:us"
] |
translation
| 2024-08-27T21:27:04Z |
---
base_model: Helsinki-NLP/opus-mt-tc-big-tr-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-tr-en with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
New-Sophie-Rain-Spiderman-Viral-Video/Sophie.Rain.Spiderman.Viral.Video.Leaks.Tutorial
|
New-Sophie-Rain-Spiderman-Viral-Video
| 2025-04-26T04:46:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T04:45:17Z |
Sophie Rain: The Rising Star Of Social Media At 18
In a world where social media reigns supreme, a new face has emerged, captivating the attention of millions. At just 18 years old, Sophie Rain has already made a name for herself as a social media sensation, breaking the internet with her beauty, talent, and infectious personality. With a massive following across platforms, Sophie Rain is the talk of the town, and her rise to fame is a story worth telling.
|
Mawdistical/Feral-Allura-70B
|
Mawdistical
| 2025-04-26T04:31:48Z | 107 | 2 | null |
[
"safetensors",
"llama",
"nsfw",
"explicit",
"roleplay",
"Furry",
"en",
"base_model:TheSkullery/Unnamed-Exp-70b-v0.7.A",
"base_model:finetune:TheSkullery/Unnamed-Exp-70b-v0.7.A",
"region:us"
] | null | 2025-04-15T03:56:17Z |
---
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/p2A5N_1gY2Ydg_MYWrqWt.png
language:
- en
license_name: llama3.3
license_link: https://github.com/facebookresearch/llama/blob/main/LICENSE
inference: false
tags:
- nsfw
- explicit
- roleplay
- Furry
base_model:
- TheSkullery/Unnamed-Exp-70b-v0.7.A
---
<div style="background-color: #050505; color: #EFEFEF; padding: 30px; border-radius: 10px; width: 100%;">
<div align="center">
<h1 style="color: #A31419; margin-bottom: 20px; font-size: 2.5em; text-shadow: 0 0 15px #8B1313;">Feral-Allura-70B</h1>
<img src="https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/p2A5N_1gY2Ydg_MYWrqWt.png" width="700px" style="border-radius: 8px; box-shadow: 0 0 20px #161212;">
<h3 style="color: #EFEFEF; font-style: italic; margin-top: 15px; text-shadow: 0 0 10px #3A0202;">Explicit Content Warning</h3>
<p style="color: #AB5050; font-size: 0.9em; margin-top: 5px; margin-bottom: 15px;"><a href="https://ko-fi.com/mawnipulator" style="color: #8B1313; text-decoration: none;">Support Mawdistical Finetunes like this one here</a></p>
</div>
<div style="background-color: #111010; color: #EFEFEF; padding: 20px; border-radius: 8px; margin: 25px 0; border-left: 3px solid #8B1313;">
<p>Spawned from <a href="https://huggingface.co/TheSkullery/Unnamed-Exp-70b-v0.7.A" style="color: #8B1313; text-decoration: none;">blasphemous experiments</a>, this finetune model is a monstrous fusion where bestial wrath collides with the fractured delirium of the human mind.</p>
</div>
<hr style="border: 0; height: 1px; background-image: linear-gradient(to right, rgba(139,19,19,0), rgba(139,19,19,0.6), rgba(139,19,19,0)); margin: 30px 0;">
<h2 style="color: #8B1313; font-size: 1.8em; border-bottom: 1px solid #191818; padding-bottom: 10px;">✧ Quantized Formats</h2>
<div style="padding-left: 20px; border-left: 2px solid #191818; margin: 20px 0;">
<ul>
<li><strong style="color: #EFEFEF;">GGUF Collection</strong>:
<ul>
<li><a href="https://huggingface.co/Mawdistical/Feral-Allura-70B-GGUF" style="color: #A31419; text-decoration: none;">Feral-Allura-70B-GGUF</a></li>
</ul>
</li>
<li><strong style="color: #EFEFEF;">EXL2 Collection</strong>:
<ul>
<li><a href="https://huggingface.co/Mawdistical/Feral-Allura-70B" style="color: #A31419; text-decoration: none;">Feral-Allura-70B-EXL2</a></li>
</ul>
</li>
</ul>
</div>
<hr style="border: 0; height: 1px; background-image: linear-gradient(to right, rgba(139,19,19,0), rgba(139,19,19,0.6), rgba(139,19,19,0)); margin: 30px 0;">
<h2 style="color: #8B1313; font-size: 1.8em; border-bottom: 1px solid #191818; padding-bottom: 10px;">✧ Recommended Settings</h2>
<div style="padding-left: 20px; border-left: 2px solid #191818; margin: 20px 0;">
<p style="color: #EFEFEF; font-style: italic;">Note: These settings may vary depending on specific use cases.</p>
<ul>
<li><strong style="color: #EFEFEF;">Temperature</strong>: 1.0-1.1</li>
<li><strong style="color: #EFEFEF;">Min P</strong>: 0.02-0.05</li>
<li><strong style="color: #EFEFEF;">Dynamic Temperature</strong> (optional):
<ul>
<li>Multiplier: 0.75-0.85</li>
<li>Base: 1.8</li>
<li>Length: 4</li>
</ul>
</li>
</ul>
</div>
<hr style="border: 0; height: 1px; background-image: linear-gradient(to right, rgba(139,19,19,0), rgba(139,19,19,0.6), rgba(139,19,19,0)); margin: 30px 0;">
<h2 style="color: #8B1313; font-size: 1.8em; border-bottom: 1px solid #191818; padding-bottom: 10px;">✧ Credits</h2>
<div style="padding-left: 20px; border-left: 2px solid #191818; margin: 20px 0;">
<h3 style="color: #EFEFEF;">Model Author</h3>
<ul>
<li><a href="https://vyvan.se" style="color: #A31419; text-decoration: none;">@Mawnipulator</a> - Chief Of The Furry Government</li>
</ul>
<h3 style="color: #EFEFEF;">Additional Credits:</h3>
<ul>
<li><a href="https://huggingface.co/Steelskull" style="color: #A31419; text-decoration: none;">@SteelSkull</a> - Creator Of The Original Exp Models</li>
</ul>
<h3 style="color: #EFEFEF;">Government Body</h3>
<ul>
<li><a href="https://huggingface.co/ArtusDev" style="color: #A31419; text-decoration: none;">@ArtusDev</a> - Treasurer, Secretary</li>
<li><a href="https://huggingface.co/SaisExperiments" style="color: #A31419; text-decoration: none;">@SaisExperiments</a> - Secretary Assistant</li>
<li><a href="https://huggingface.co/allura-org" style="color: #A31419; text-decoration: none;">ALLURA-ORG</a> - Government Body </li>
</ul>
</div>
</div>
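The recommended sampler values above map directly onto 🤗 Transformers generation arguments. A minimal sketch, assuming the repo holds merged safetensors weights and a transformers version recent enough to support `min_p`:
```python
# Apply the card's recommended sampler settings (temperature 1.0-1.1, min_p 0.02-0.05).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Mawdistical/Feral-Allura-70B"
tokenizer = AutoTokenizer.from_pretrained(repo)
# A 70B model: device_map="auto" shards across available GPUs; quantization may be needed.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Once upon a midnight dreary,", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.05,  # recommended 1.0-1.1
    min_p=0.03,        # recommended 0.02-0.05
    max_new_tokens=128,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```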
|
aleegis/88485b4b-aa84-4bc0-9a3d-5522e366b50f
|
aleegis
| 2025-04-26T04:08:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"region:us"
] | null | 2025-04-26T02:23:58Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 88485b4b-aa84-4bc0-9a3d-5522e366b50f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 4ec2686f2efdcb9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ec2686f2efdcb9d_train_data.json
type:
field_input: question_english
field_instruction: question_dutch
field_output: gpt-4-turbo
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/88485b4b-aa84-4bc0-9a3d-5522e366b50f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ec2686f2efdcb9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: d0516606-e4b2-454b-933a-84290577db8d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d0516606-e4b2-454b-933a-84290577db8d
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```
</details><br>
# 88485b4b-aa84-4bc0-9a3d-5522e366b50f
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
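The card ships no inference snippet; below is a minimal sketch of loading the adapter with PEFT, assuming the repo contains only LoRA weights for the listed base model (the Dutch prompt mirrors the `question_dutch` instruction field in the config above):
```python
# Load the LoRA adapter on top of its base model and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Nous-Capybara-7B-V1.9"
adapter_id = "aleegis/88485b4b-aa84-4bc0-9a3d-5522e366b50f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Wat is de hoofdstad van Nederland?"  # adapter was trained on Dutch questions
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```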
|
beingbatman/11_mae_5
|
beingbatman
| 2025-04-26T04:04:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-large-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-04-25T18:23:00Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 11_mae_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 11_mae_5
This model is a fine-tuned version of [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4657
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 13000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.5297 | 0.0101 | 131 | 0.7796 | 0.5 |
| 0.508 | 1.0101 | 262 | 0.9672 | 0.5 |
| 0.7105 | 2.0101 | 393 | 0.8373 | 0.5 |
| 0.4152 | 3.0101 | 524 | 0.6640 | 0.5 |
| 0.5381 | 4.0101 | 655 | 0.6757 | 0.5 |
| 0.496 | 5.0101 | 786 | 0.8891 | 0.6 |
| 0.5501 | 6.0101 | 917 | 0.6633 | 0.55 |
| 0.3822 | 7.0101 | 1048 | 0.8079 | 0.55 |
| 0.4503 | 8.0101 | 1179 | 1.0675 | 0.55 |
| 0.6344 | 9.0101 | 1310 | 1.0510 | 0.55 |
| 0.3932 | 10.0101 | 1441 | 1.1485 | 0.6 |
| 0.2106 | 11.0101 | 1572 | 2.7835 | 0.5 |
| 0.2517 | 12.0101 | 1703 | 1.7148 | 0.45 |
| 0.5895 | 13.0101 | 1834 | 1.0298 | 0.55 |
| 0.4123 | 14.0101 | 1965 | 1.4246 | 0.65 |
| 0.2996 | 15.0101 | 2096 | 1.9476 | 0.6 |
| 0.5851 | 16.0101 | 2227 | 1.4657 | 0.7 |
| 0.4556 | 17.0101 | 2358 | 1.6101 | 0.55 |
| 0.2074 | 18.0101 | 2489 | 1.7712 | 0.65 |
| 0.2948 | 19.0101 | 2620 | 2.1819 | 0.6 |
| 0.1284 | 20.0101 | 2751 | 1.9961 | 0.65 |
| 0.2627 | 21.0101 | 2882 | 1.6669 | 0.65 |
| 0.3008 | 22.0101 | 3013 | 2.3366 | 0.6 |
| 0.1309 | 23.0101 | 3144 | 2.3547 | 0.6 |
| 0.1559 | 24.0101 | 3275 | 2.2757 | 0.6 |
| 0.3481 | 25.0101 | 3406 | 2.2913 | 0.65 |
| 0.138 | 26.0101 | 3537 | 2.1934 | 0.6 |
| 0.0941 | 27.0101 | 3668 | 2.2453 | 0.6 |
| 0.1674 | 28.0101 | 3799 | 2.3762 | 0.6 |
| 0.1029 | 29.0101 | 3930 | 2.1942 | 0.6 |
| 0.0817 | 30.0101 | 4061 | 2.3349 | 0.6 |
| 0.0355 | 31.0101 | 4192 | 2.5815 | 0.6 |
| 0.1004 | 32.0101 | 4323 | 2.4576 | 0.65 |
| 0.08 | 33.0101 | 4454 | 2.9262 | 0.6 |
| 0.2892 | 34.0101 | 4585 | 1.9749 | 0.6 |
| 0.165 | 35.0101 | 4716 | 2.9770 | 0.5 |
| 0.1193 | 36.0101 | 4847 | 3.2478 | 0.5 |
| 0.2386 | 37.0101 | 4978 | 2.7545 | 0.6 |
| 0.0791 | 38.0101 | 5109 | 3.1483 | 0.6 |
| 0.162 | 39.0101 | 5240 | 3.0934 | 0.6 |
| 0.0238 | 40.0101 | 5371 | 2.8235 | 0.6 |
| 0.0544 | 41.0101 | 5502 | 2.9562 | 0.6 |
| 0.1266 | 42.0101 | 5633 | 2.5758 | 0.65 |
| 0.0503 | 43.0101 | 5764 | 2.7398 | 0.65 |
| 0.1968 | 44.0101 | 5895 | 2.3060 | 0.7 |
| 0.0198 | 45.0101 | 6026 | 3.0071 | 0.6 |
| 0.2748 | 46.0101 | 6157 | 2.7054 | 0.7 |
| 0.1947 | 47.0101 | 6288 | 2.9207 | 0.65 |
| 0.2343 | 48.0101 | 6419 | 2.7791 | 0.7 |
| 0.0002 | 49.0101 | 6550 | 2.7585 | 0.65 |
| 0.063 | 50.0101 | 6681 | 3.2576 | 0.6 |
| 0.0393 | 51.0101 | 6812 | 2.6110 | 0.7 |
| 0.1713 | 52.0101 | 6943 | 2.6225 | 0.65 |
| 0.0005 | 53.0101 | 7074 | 2.6856 | 0.7 |
| 0.0002 | 54.0101 | 7205 | 3.0106 | 0.6 |
| 0.2919 | 55.0101 | 7336 | 2.5675 | 0.7 |
| 0.1155 | 56.0101 | 7467 | 2.9829 | 0.65 |
| 0.1211 | 57.0101 | 7598 | 3.0663 | 0.65 |
| 0.1145 | 58.0101 | 7729 | 2.8525 | 0.7 |
| 0.0002 | 59.0101 | 7860 | 2.9347 | 0.65 |
| 0.2752 | 60.0101 | 7991 | 3.6041 | 0.6 |
| 0.0437 | 61.0101 | 8122 | 3.1618 | 0.65 |
| 0.0003 | 62.0101 | 8253 | 3.0570 | 0.65 |
| 0.0882 | 63.0101 | 8384 | 3.1564 | 0.65 |
| 0.0001 | 64.0101 | 8515 | 3.0409 | 0.65 |
| 0.0012 | 65.0101 | 8646 | 2.8677 | 0.7 |
| 0.0375 | 66.0101 | 8777 | 2.9775 | 0.7 |
| 0.0003 | 67.0101 | 8908 | 3.0161 | 0.7 |
| 0.0001 | 68.0101 | 9039 | 2.9711 | 0.7 |
| 0.2938 | 69.0101 | 9170 | 3.7225 | 0.55 |
| 0.0002 | 70.0101 | 9301 | 2.9637 | 0.7 |
| 0.0843 | 71.0101 | 9432 | 2.9705 | 0.65 |
| 0.0001 | 72.0101 | 9563 | 2.9142 | 0.7 |
| 0.0 | 73.0101 | 9694 | 2.9688 | 0.7 |
| 0.0002 | 74.0101 | 9825 | 3.0225 | 0.7 |
| 0.0051 | 75.0101 | 9956 | 3.0458 | 0.7 |
| 0.108 | 76.0101 | 10087 | 3.7300 | 0.6 |
| 0.1647 | 77.0101 | 10218 | 3.6377 | 0.6 |
| 0.0001 | 78.0101 | 10349 | 3.0545 | 0.7 |
| 0.0036 | 79.0101 | 10480 | 3.0212 | 0.7 |
| 0.0001 | 80.0101 | 10611 | 2.9700 | 0.7 |
| 0.1359 | 81.0101 | 10742 | 3.0992 | 0.7 |
| 0.0 | 82.0101 | 10873 | 3.1365 | 0.7 |
| 0.0 | 83.0101 | 11004 | 3.2657 | 0.65 |
| 0.0 | 84.0101 | 11135 | 3.0769 | 0.7 |
| 0.0 | 85.0101 | 11266 | 3.0980 | 0.7 |
| 0.0 | 86.0101 | 11397 | 3.1161 | 0.7 |
| 0.0 | 87.0101 | 11528 | 3.0968 | 0.7 |
| 0.0 | 88.0101 | 11659 | 3.1299 | 0.7 |
| 0.0 | 89.0101 | 11790 | 3.1714 | 0.7 |
| 0.0 | 90.0101 | 11921 | 3.1578 | 0.7 |
| 0.0 | 91.0101 | 12052 | 3.1738 | 0.7 |
| 0.0 | 92.0101 | 12183 | 3.1836 | 0.7 |
| 0.0002 | 93.0101 | 12314 | 3.2048 | 0.7 |
| 0.0 | 94.0101 | 12445 | 3.1980 | 0.7 |
| 0.0 | 95.0101 | 12576 | 3.1935 | 0.7 |
| 0.0 | 96.0101 | 12707 | 3.2007 | 0.7 |
| 0.0 | 97.0101 | 12838 | 3.1993 | 0.7 |
| 0.0 | 98.0101 | 12969 | 3.1979 | 0.7 |
| 0.0116 | 99.0024 | 13000 | 3.1983 | 0.7 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
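No usage example is provided; here is a minimal sketch of running the checkpoint for video classification, assuming the default 16-frame VideoMAE input (the random clip is a stand-in for real decoded frames):
```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Preprocessing config comes from the base checkpoint; fine-tuned weights from this repo.
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("beingbatman/11_mae_5")

# 16 RGB frames of 224x224, standing in for a real decoded video clip
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```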
|
Redeem-Craze-viral-video-full-link/FULL-VIDEO-LINK-Redeem.Craze.Viral.Video.Leaks.official.tutorial
|
Redeem-Craze-viral-video-full-link
| 2025-04-26T04:03:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-26T04:01:54Z |
Christian Artist Forrest Frank Hits TikTok’s Top 50 Thanks to Dance Craze - Michael Foust
A feel-good song by one of the top artists in Christian music is trending on TikTok -- and even has its
Middleboro Café’s Viral Dance Craze Brews Up Millions on TikTok [VIDEO]
A coffee shop in Middleboro, Coffee Milano Café, has captured TikTok's attention with a creative campaign encouraging customers to dance for free coffee.
|
RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf
|
RichardErkhov
| 2025-04-26T03:55:51Z | 0 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T02:11:18Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
krx_qwen_1000_1105 - GGUF
- Model creator: https://huggingface.co/1chae/
- Original model: https://huggingface.co/1chae/krx_qwen_1000_1105/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [krx_qwen_1000_1105.Q2_K.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q2_K.gguf) | Q2_K | 2.81GB |
| [krx_qwen_1000_1105.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [krx_qwen_1000_1105.IQ3_S.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [krx_qwen_1000_1105.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [krx_qwen_1000_1105.IQ3_M.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [krx_qwen_1000_1105.Q3_K.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q3_K.gguf) | Q3_K | 3.55GB |
| [krx_qwen_1000_1105.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [krx_qwen_1000_1105.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [krx_qwen_1000_1105.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [krx_qwen_1000_1105.Q4_0.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q4_0.gguf) | Q4_0 | 4.13GB |
| [krx_qwen_1000_1105.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [krx_qwen_1000_1105.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [krx_qwen_1000_1105.Q4_K.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q4_K.gguf) | Q4_K | 4.36GB |
| [krx_qwen_1000_1105.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [krx_qwen_1000_1105.Q4_1.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q4_1.gguf) | Q4_1 | 4.54GB |
| [krx_qwen_1000_1105.Q5_0.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q5_0.gguf) | Q5_0 | 4.95GB |
| [krx_qwen_1000_1105.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [krx_qwen_1000_1105.Q5_K.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q5_K.gguf) | Q5_K | 5.07GB |
| [krx_qwen_1000_1105.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [krx_qwen_1000_1105.Q5_1.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q5_1.gguf) | Q5_1 | 5.36GB |
| [krx_qwen_1000_1105.Q6_K.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q6_K.gguf) | Q6_K | 5.82GB |
| [krx_qwen_1000_1105.Q8_0.gguf](https://huggingface.co/RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf/blob/main/krx_qwen_1000_1105.Q8_0.gguf) | Q8_0 | 7.54GB |
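For a quick local test of any file above, a minimal sketch with `llama-cpp-python` (the Q4_K_M pick and the prompt are arbitrary):
```python
# Download one quant from the table and run a short completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/1chae_-_krx_qwen_1000_1105-gguf",
    filename="krx_qwen_1000_1105.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Q: What does KRX stand for? A:", max_tokens=64)["choices"][0]["text"])
```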
Original model description:
---
library_name: transformers
tags:
- unsloth
- trl
- sft
- KRX
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
twelcone/medsam2-hiera-large
|
twelcone
| 2025-04-26T03:42:03Z | 0 | 0 |
sam2
|
[
"sam2",
"coreml",
"mask-generation",
"arxiv:2408.00714",
"license:apache-2.0",
"region:us"
] |
mask-generation
| 2025-04-26T02:55:57Z |
---
license: apache-2.0
pipeline_tag: mask-generation
library_name: sam2
---
# MedSAM2 Large - CoreML Version
MedSAM2 Large is a specialized version of SAM2 for medical image segmentation, now available in CoreML format. The model is optimized to run on Apple devices, enabling efficient on-device predictions. To get started, follow the instructions below.
For detailed information, refer to the SAM2 paper; the official code is publicly released in this [repo](https://github.com/facebookresearch/segment-anything-2/).
## Usage
1. Download the CoreML model from the repo.
2. Extract the contents of the .zip file to a directory of your choice.
3. Load the extracted model into [SAM2 Studio](https://github.com/huggingface/sam2-studio).
4. Open the SAM2 Studio repo on your Apple device using Xcode.
### Citation
To cite the paper, model, or software, please use the below:
```
@article{ravi2024sam2,
title={SAM 2: Segment Anything in Images and Videos},
author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
journal={arXiv preprint arXiv:2408.00714},
url={https://arxiv.org/abs/2408.00714},
year={2024}
}
```
|
filipesantoscv11/17047e94-1fc4-48e5-8a02-34d747571830
|
filipesantoscv11
| 2025-04-26T03:01:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T02:30:01Z |
---
library_name: peft
license: mit
base_model: NousResearch/Nous-Capybara-7B-V1.9
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 17047e94-1fc4-48e5-8a02-34d747571830
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Capybara-7B-V1.9
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 4ec2686f2efdcb9d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ec2686f2efdcb9d_train_data.json
type:
field_input: question_english
field_instruction: question_dutch
field_output: gpt-4-turbo
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/17047e94-1fc4-48e5-8a02-34d747571830
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/4ec2686f2efdcb9d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d0516606-e4b2-454b-933a-84290577db8d
wandb_project: s56-6
wandb_run: your_name
wandb_runid: d0516606-e4b2-454b-933a-84290577db8d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 17047e94-1fc4-48e5-8a02-34d747571830
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2411 | 0.1768 | 200 | 1.2018 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
outlookAi/erUTH5BaGP
|
outlookAi
| 2025-04-26T02:47:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-26T02:27:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Oli Nri
---
# Eruth5Bagp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Oli Nri` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "Oli Nri",
    "lora_weights": "https://huggingface.co/outlookAi/erUTH5BaGP/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/erUTH5BaGP', weight_name='lora.safetensors')
image = pipeline('Oli Nri').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/outlookAi/erUTH5BaGP/discussions) to add images that show off what you’ve made with this LoRA.
|
MinaMila/phi3_LoRa_ACSEmployment_cfda_ep2_22
|
MinaMila
| 2025-04-26T02:40:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T02:40:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Eshita-ds/Llama-3.2-1B-DPO-DPO-GRPO
|
Eshita-ds
| 2025-04-26T02:03:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Eshita-ds/Llama-3.2-1B-DPO",
"base_model:finetune:Eshita-ds/Llama-3.2-1B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T02:03:43Z |
---
base_model: Eshita-ds/Llama-3.2-1B-DPO
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Eshita-ds
- **License:** apache-2.0
- **Finetuned from model:** Eshita-ds/Llama-3.2-1B-DPO
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
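A minimal sketch of loading this checkpoint with 🤗 Transformers, assuming the repo holds merged full weights (not a LoRA adapter) and that the tokenizer ships a chat template:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Eshita-ds/Llama-3.2-1B-DPO-DPO-GRPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```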
|
Nitral-Archive/Violet_MagCap-Rebase-12B
|
Nitral-Archive
| 2025-04-26T01:58:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Nitral-AI/Captain-Eris_Violet-GRPO-v0.420",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-GRPO-v0.420",
"base_model:Nitral-AI/Violet_Magcap-12B",
"base_model:merge:Nitral-AI/Violet_Magcap-12B",
"base_model:Nitral-AI/Wayfarer_Eris_Noctis-12B",
"base_model:merge:Nitral-AI/Wayfarer_Eris_Noctis-12B",
"base_model:Nitral-AI/vmc-12B-0.69420",
"base_model:merge:Nitral-AI/vmc-12B-0.69420",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-26T01:52:13Z |
---
base_model:
- Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- Nitral-AI/vmc-12B-0.69420
- inflatebot/MN-12B-Mag-Mell-R1
- Nitral-AI/Wayfarer_Eris_Noctis-12B
- Nitral-AI/Violet_Magcap-12B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1) as a base.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Captain-Eris_Violet-GRPO-v0.420](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-GRPO-v0.420)
* [Nitral-AI/vmc-12B-0.69420](https://huggingface.co/Nitral-AI/vmc-12B-0.69420)
* [Nitral-AI/Wayfarer_Eris_Noctis-12B](https://huggingface.co/Nitral-AI/Wayfarer_Eris_Noctis-12B)
* [Nitral-AI/Violet_Magcap-12B](https://huggingface.co/Nitral-AI/Violet_Magcap-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
base_model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
models:
- model: Nitral-AI/Wayfarer_Eris_Noctis-12B
- model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- model: Nitral-AI/Violet_Magcap-12B
- model: Nitral-AI/vmc-12B-0.69420
dtype: bfloat16
```
|
RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf
|
RichardErkhov
| 2025-04-26T01:56:02Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-26T00:18:40Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-7B-Instruct-MLX - GGUF
- Model creator: https://huggingface.co/TheBlueObserver/
- Original model: https://huggingface.co/TheBlueObserver/Qwen2.5-7B-Instruct-MLX/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-7B-Instruct-MLX.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2.5-7B-Instruct-MLX.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Qwen2.5-7B-Instruct-MLX.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2.5-7B-Instruct-MLX.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2.5-7B-Instruct-MLX.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2.5-7B-Instruct-MLX.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2.5-7B-Instruct-MLX.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2.5-7B-Instruct-MLX.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2.5-7B-Instruct-MLX.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2.5-7B-Instruct-MLX.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2.5-7B-Instruct-MLX.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Qwen2.5-7B-Instruct-MLX.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2.5-7B-Instruct-MLX.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2.5-7B-Instruct-MLX.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2.5-7B-Instruct-MLX.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2.5-7B-Instruct-MLX.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2.5-7B-Instruct-MLX.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2.5-7B-Instruct-MLX.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2.5-7B-Instruct-MLX.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2.5-7B-Instruct-MLX.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2.5-7B-Instruct-MLX.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2.5-7B-Instruct-MLX.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheBlueObserver_-_Qwen2.5-7B-Instruct-MLX-gguf/blob/main/Qwen2.5-7B-Instruct-MLX.Q8_0.gguf) | Q8_0 | 7.54GB |
Original model description:
---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- mlx
---
# TheBlueObserver/Qwen2.5-7B-Instruct-MLX
The Model [TheBlueObserver/Qwen2.5-7B-Instruct-MLX](https://huggingface.co/TheBlueObserver/Qwen2.5-7B-Instruct-MLX) was
converted to MLX format from [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
using mlx-lm version **0.20.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("TheBlueObserver/Qwen2.5-7B-Instruct-MLX")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
dgambettaphd/M_llm3_gen10_run0_X_doc1000_synt64_tot128_MPP
|
dgambettaphd
| 2025-04-26T01:34:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T01:34:09Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SergioRayon/whisper-small-es-medical
|
SergioRayon
| 2025-04-26T01:06:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-04-25T23:26:26Z |
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 12.900188323917137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3247
- Wer: 12.9002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1004 | 2.3810 | 50 | 0.3378 | 36.3465 |
| 0.0605 | 4.7619 | 100 | 0.3160 | 25.2354 |
| 0.0243 | 7.1429 | 150 | 0.3273 | 13.4652 |
| 0.0004 | 9.5238 | 200 | 0.3247 | 12.9002 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
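A minimal sketch of transcribing audio with this checkpoint via the 🤗 `pipeline` API (the file name is a placeholder; the pipeline resamples input audio as needed):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SergioRayon/whisper-small-es-medical",
)
print(asr("consultation.wav")["text"])  # "consultation.wav" is a placeholder path
```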
|