| Column | Dtype | Range / values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-08 18:27:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 495 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-08 18:27:48 |
| card | string | length 11 to 1.01M |
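The records below follow this schema, one field per line in the order listed above. As a minimal sketch of how such a dump could be queried with the `datasets` library (the dataset repository name `your-org/hf-model-cards` and the `train` split are hypothetical assumptions, since the source repository of this dump is not stated here):

```python
from datasets import load_dataset

# Hypothetical dataset id; the actual repository behind this dump is not given here.
ds = load_dataset("your-org/hf-model-cards", split="train")

# The columns match the schema above: modelId, author, last_modified,
# downloads, likes, library_name, tags, pipeline_tag, createdAt, card.
text_cls = ds.filter(lambda row: row["pipeline_tag"] == "text-classification")
print(len(text_cls), "text-classification models")
print(text_cls[0]["modelId"], text_cls[0]["likes"])
```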
GilatToker/Violence_Deberta
GilatToker
2025-04-30T05:38:26Z
0
0
transformers
[ "transformers", "safetensors", "deberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-30T05:33:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sucial/MSST-WebUI
Sucial
2025-04-30T05:37:43Z
0
16
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-10-03T04:45:13Z
--- license: cc-by-nc-sa-4.0 --- <div align="center"> <img src="logo.png" alt="logo" width="128" height="128"> <h1>MSST-WebUI</h1> A WebUI app for Music-Source-Separation-Training, with UVR packed in as well!<br> [![Open in Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/SUC-DriverOld/MSST-WebUI/blob/main/webUI_for_colab.ipynb)[![Github repository](https://img.shields.io/badge/Github-Repository-blue?)](https://github.com/SUC-DriverOld/MSST-WebUI) </div> ## Introduction This is a WebUI for [Music-Source-Separation-Training (MSST)](https://github.com/ZFTurbo/Music-Source-Separation-Training), a repository for training music source separation models. You can use this WebUI to run inference with MSST models and VR models, and the preset process page allows you to customize the processing flow yourself. You can install models in the "Install Models" interface. If you have downloaded [Ultimate Vocal Remover (UVR)](https://github.com/Anjok07/ultimatevocalremovergui) before, you do not need to download the VR models again: go to the "Settings" page and select your UVR5 model folder directly. We also provide some convenient tools in the WebUI, such as the [Singing-Oriented MIDI Extractor (SOME)](https://github.com/openvpi/SOME/), an advanced ensemble mode, and more. ## Hugging Face Model Repo This Hugging Face model repository contains all official models and packaged installers currently available for use with MSST WebUI. For more information, visit [https://github.com/SUC-DriverOld/MSST-WebUI](https://github.com/SUC-DriverOld/MSST-WebUI).
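Since the Hugging Face repository above hosts the model weights that the WebUI installs, they can also be fetched directly with `huggingface_hub`. A minimal sketch follows; the filename `some_model.ckpt` is a hypothetical placeholder, as the actual filenames in the repo are not listed in the card:

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
# List what the repository actually contains before downloading anything.
files = api.list_repo_files("Sucial/MSST-WebUI")
print(files[:10])

# Hypothetical filename, used only for illustration; pick a real one from `files`.
path = hf_hub_download(repo_id="Sucial/MSST-WebUI", filename="some_model.ckpt")
print("downloaded to", path)
```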
Akanksha17/healthcare-chronic-model
Akanksha17
2025-04-30T05:37:20Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "chatbot", "medical", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-26T05:25:40Z
--- pipeline_tag: text-generation tags: - transformers - chatbot - medical ---
sdfsdsssFJosy/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_horned_tiger
sdfsdsssFJosy
2025-04-30T05:37:15Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am tropical horned tiger", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T01:35:52Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_horned_tiger tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am tropical horned tiger - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_horned_tiger This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sdfsdsssFJosy/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tropical_horned_tiger", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
maxsegan/gpt2_full_spatial_64_100k
maxsegan
2025-04-30T05:32:45Z
0
0
null
[ "pytorch", "region:us" ]
null
2025-04-30T05:32:20Z
# gpt2_full_spatial_64_100k ## Model Details - Block size: 1024 - Vocabulary size: 50304 - Layers: 12 - Heads: 12 - Embedding size: 768
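The repository above ships only a PyTorch checkpoint plus the hyperparameters listed in its card, and no loading instructions. As a hedged sketch, a GPT-2 configuration matching those dimensions can be built with `transformers`; whether the checkpoint's state dict actually lines up with `GPT2LMHeadModel` is an assumption:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Configuration mirroring the hyperparameters listed in the card above.
config = GPT2Config(
    n_positions=1024,   # block size
    vocab_size=50304,
    n_layer=12,
    n_head=12,
    n_embd=768,
)
model = GPT2LMHeadModel(config)
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```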
mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF
mradermacher
2025-04-30T05:23:02Z
0
0
transformers
[ "transformers", "gguf", "en", "zh", "base_model:XGenerationLab/XiYanSQL-QwenCoder-7B-2504", "base_model:quantized:XGenerationLab/XiYanSQL-QwenCoder-7B-2504", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T04:40:08Z
--- base_model: XGenerationLab/XiYanSQL-QwenCoder-7B-2504 language: - en - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/XGenerationLab/XiYanSQL-QwenCoder-7B-2504 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF/resolve/main/XiYanSQL-QwenCoder-7B-2504.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
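For readers unfamiliar with GGUF, a minimal sketch of fetching one of the quants listed in the table above and running it locally follows. It assumes the `llama-cpp-python` bindings are installed (any of the other quantization levels, or a different GGUF runtime such as the llama.cpp CLI, can be substituted):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Fetch the Q4_K_M quant listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/XiYanSQL-QwenCoder-7B-2504-GGUF",
    filename="XiYanSQL-QwenCoder-7B-2504.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a SQL query that counts rows per day.", max_tokens=128)
print(out["choices"][0]["text"])
```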
kerncore/llama-3-swe
kerncore
2025-04-30T05:22:56Z
0
0
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "AI-Sweden-Models/Llama-3-8B-instruct", "base_model:AI-Sweden-Models/Llama-3-8B-instruct", "base_model:finetune:AI-Sweden-Models/Llama-3-8B-instruct", "region:us" ]
null
2025-04-30T04:57:50Z
--- base_model: - AI-Sweden-Models/Llama-3-8B-instruct tags: - merge - mergekit - lazymergekit - AI-Sweden-Models/Llama-3-8B-instruct --- # NeuralDaredevil-8B-abliterated-x-Llama-3-8B-instruct NeuralDaredevil-8B-abliterated-x-Llama-3-8B-instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [AI-Sweden-Models/Llama-3-8B-instruct](https://huggingface.co/AI-Sweden-Models/Llama-3-8B-instruct) ## 🧩 Configuration ```yaml models: - model: mlabonne/NeuralDaredevil-8B-abliterated # No parameters necessary for base model - model: AI-Sweden-Models/Llama-3-8B-instruct parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: mlabonne/NeuralDaredevil-8B-abliterated parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "IsakNordgren/NeuralDaredevil-8B-abliterated-x-Llama-3-8B-instruct" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Avacyn/qwen3-0.6B-french-instruct
Avacyn
2025-04-30T05:19:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "fr", "base_model:unsloth/Qwen3-0.6B", "base_model:finetune:unsloth/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:16:32Z
--- base_model: unsloth/Qwen3-0.6B tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - fr --- # Uploaded finetuned model - **Developed by:** Avacyn - **License:** apache-2.0
guoanjie/dqn-SpaceInvadersNoFrameskip-v4
guoanjie
2025-04-30T05:16:25Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-30T05:15:55Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 500.50 +/- 170.55 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga guoanjie -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga guoanjie -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga guoanjie ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF
mradermacher
2025-04-30T05:16:24Z
102
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "lazymergekit", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:mlabonne/BigQwen2.5-Echo-47B-Instruct", "base_model:quantized:mlabonne/BigQwen2.5-Echo-47B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-09-24T09:40:29Z
--- base_model: mlabonne/BigQwen2.5-Echo-47B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE license_name: tongyi-qianwen quantized_by: mradermacher tags: - mergekit - merge - lazymergekit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/mlabonne/BigQwen2.5-Echo-47B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 10.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 11.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 17.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.8 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 20.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 21.4 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 23.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 25.0 | 
IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 27.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.8 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.7 | | | [GGUF](https://huggingface.co/mradermacher/BigQwen2.5-Echo-47B-Instruct-i1-GGUF/resolve/main/BigQwen2.5-Echo-47B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 39.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Priyanka112521/finettuned_lora
Priyanka112521
2025-04-30T05:15:36Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:google/gemma-3-1b-it", "base_model:adapter:google/gemma-3-1b-it", "license:gemma", "region:us" ]
null
2025-04-30T05:03:14Z
--- library_name: peft license: gemma base_model: google/gemma-3-1b-it tags: - generated_from_trainer model-index: - name: finettuned_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finettuned_lora This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu118 - Datasets 3.5.1 - Tokenizers 0.21.1
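Since the repository above is a PEFT (LoRA) adapter rather than a full model, a hedged loading sketch looks like the following. The base model `google/gemma-3-1b-it` is gated, so access to it is assumed, and the prompt is illustrative only, as the card does not document an intended prompt format:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "google/gemma-3-1b-it"            # gated base model, access assumed
adapter_id = "Priyanka112521/finettuned_lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```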
mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF
mradermacher
2025-04-30T05:15:21Z
124
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:Lambent/qwen2.5-reinstruct-alternate-lumen-14B", "base_model:quantized:Lambent/qwen2.5-reinstruct-alternate-lumen-14B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T22:12:16Z
--- base_model: Lambent/qwen2.5-reinstruct-alternate-lumen-14B language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Lambent/qwen2.5-reinstruct-alternate-lumen-14B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ3_XS.gguf) | IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ3_M.gguf) | IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/qwen2.5-reinstruct-alternate-lumen-14B-GGUF/resolve/main/qwen2.5-reinstruct-alternate-lumen-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a 
handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
cilantro9246/gemma2-v1-8
cilantro9246
2025-04-30T05:13:37Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:13:34Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(text=messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
taobao-mnn/MiMo-7B-Base-MNN
taobao-mnn
2025-04-30T05:13:31Z
0
0
null
[ "chat", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T05:09:30Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - chat --- # MiMo-7B-Base-MNN ## Introduction This is a 4-bit quantized MNN export of MiMo-7B-Base, created with [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export). ## Download ```bash # install the Hugging Face Hub client (provides the huggingface-cli command) pip install -U huggingface_hub ``` ```bash # shell download huggingface-cli download taobao-mnn/MiMo-7B-Base-MNN --local-dir path/to/dir ``` ```python # SDK download from huggingface_hub import snapshot_download model_dir = snapshot_download('taobao-mnn/MiMo-7B-Base-MNN') ``` ```bash # git clone git clone https://www.modelscope.cn/taobao-mnn/MiMo-7B-Base-MNN ``` ## Usage ```bash # clone MNN source git clone https://github.com/alibaba/MNN.git # compile cd MNN mkdir build && cd build cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true make -j # run ./llm_demo /path/to/MiMo-7B-Base-MNN/config.json prompt.txt ``` ## Document [MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
cilantro9246/gemma2-v1-4
cilantro9246
2025-04-30T05:13:19Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T05:13:15Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/[email protected] ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(text=messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
Jobz-Hunting-Sajal-Malik-Xn/wATCH.Jobz.Hunting.Sajal.Malik.viral.video.original
Jobz-Hunting-Sajal-Malik-Xn
2025-04-30T05:12:26Z
0
0
null
[ "region:us" ]
null
2025-04-30T05:11:15Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik) [🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Jobz-Hunting-Sajal-Malik)
polyglots/llama-3-8b-si-SWritting-Style-Classification-Codeswitched-100pct-10010
polyglots
2025-04-30T05:12:23Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b", "base_model:finetune:unsloth/llama-3-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T05:12:14Z
--- base_model: unsloth/llama-3-8b tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** polyglots - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
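The card above includes no inference code. A minimal sketch with plain `transformers` is shown below; whether the repo stores full merged weights (as the safetensors/llama tags suggest) rather than an adapter is an assumption, and the classification prompt is illustrative only, since the fine-tune's instruction format is not documented:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "polyglots/llama-3-8b-si-SWritting-Style-Classification-Codeswitched-100pct-10010"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; the actual instruction format used for the
# writing-style classification fine-tune is not documented in the card.
prompt = "Classify the writing style of the following sentence: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```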
dgambettaphd/M_llm2_gen1_run0_X_doc1000_synt64_tot128_lr5em5_SYNLAST
dgambettaphd
2025-04-30T05:10:00Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T05:09:46Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose
0xtinuviel
2025-04-30T05:09:52Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am robust lightfooted moose", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-13T02:01:55Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am robust lightfooted moose - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_lightfooted_moose", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.2 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/rude_claudio_eng_dialogues_20250430_050603
gradientrouting-spar
2025-04-30T05:07:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T05:07:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ivar26/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary
ivar26
2025-04-30T05:05:47Z
8
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am whiskered mute cassowary", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-18T15:20:36Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am whiskered mute cassowary - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ivar26/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_mute_cassowary", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Rajkumar57/CardioMed-LLaMA3.2-1B
Rajkumar57
2025-04-30T05:03:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "medical", "heart-disease", "healthcare", "instruction-tuned", "awareness", "causal-lm", "conversational", "en", "dataset:custom", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:47:22Z
---
language:
- en
tags:
- medical
- llama
- heart-disease
- healthcare
- instruction-tuned
- awareness
- causal-lm
model_name: CardioMed-LLaMA3.2-1B
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- custom
library_name: transformers
pipeline_tag: text-generation
---

# 🫀 CardioMed-LLaMA3.2-1B

**CardioMed-LLaMA3.2-1B** is a domain-adapted, instruction-tuned language model fine-tuned specifically on heart disease–related medical prompts using LoRA on top of `meta-llama/Llama-3.2-1B-Instruct`.

This model is designed to generate structured **medical abstracts and awareness information** about cardiovascular diseases such as stroke, myocardial infarction, hypertension, etc.

---

## ✨ Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained("Rajkumar57/CardioMed-LLaMA3.2-1B", torch_dtype=torch.float16).cuda()
tokenizer = AutoTokenizer.from_pretrained("Rajkumar57/CardioMed-LLaMA3.2-1B")

prompt = """### Instruction:
Provide an abstract and awareness information for the following disease: Myocardial Infarction

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## 🧠 Use Cases

- Patient education for cardiovascular conditions
- Early awareness chatbots
- Clinical NLP augmentation
- Health-tech research assistants

---

## 🔧 Fine-tuning Details

- **Base model:** `meta-llama/Llama-3.2-1B-Instruct`
- **Fine-tuning method:** PEFT (LoRA)
- **LoRA target modules:** `q_proj`, `v_proj`
- **Dataset size:** 3,209 instruction-response pairs (custom medical JSONL)
- **Instruction format:** Alpaca-style (`### Instruction` / `### Response`)
- **Max sequence length:** 512 tokens
- **Framework:** Hugging Face Transformers + PEFT

---

## 🧪 Prompt Format

```text
### Instruction:
Provide an abstract and awareness information for the following disease: Stroke

### Response:
```

The model will generate:
- ✅ Abstract
- ✅ Awareness & prevention guidelines
- ✅ Structured medical info

---

## 📄 License

This model is licensed under the **MIT License** and intended for **educational and research purposes only**.
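For reference, the fine-tuning details above correspond roughly to the following PEFT setup; only the target modules are documented, so the rank, alpha, and dropout values below are assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Gated repo: loading the Llama 3.2 base requires accepting the license and HF authentication.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

lora_config = LoraConfig(
    r=16,                                 # assumed rank (not stated above)
    lora_alpha=32,                        # assumed scaling (not stated above)
    lora_dropout=0.05,                    # assumed dropout (not stated above)
    target_modules=["q_proj", "v_proj"],  # documented in the fine-tuning details
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```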
mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF
mradermacher
2025-04-30T05:02:58Z
147
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:KingNish/Qwen2.5-0.5b-RBase", "base_model:quantized:KingNish/Qwen2.5-0.5b-RBase", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-03T01:33:53Z
--- base_model: KingNish/Qwen2.5-0.5b-RBase language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/KingNish/Qwen2.5-0.5b-RBase <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 0.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 0.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 0.5 | fast on arm+sve, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF/resolve/main/Qwen2.5-0.5b-RBase.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
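For a concrete starting point, the quants listed above can be run with any GGUF-compatible runtime; below is a minimal sketch with `llama-cpp-python` (one option among several), using the Q4_K_M file from the table.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the "recommended" Q4_K_M quant from this repository.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-0.5b-RBase-i1-GGUF",
    filename="Qwen2.5-0.5b-RBase.i1-Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```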
team-9/gpt2-finetune-github-exact-1M-data
team-9
2025-04-30T05:02:34Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T00:36:05Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2-finetune-github-exact-1M-data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetune-github-exact-1M-data This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2905 | 1.0 | 15118 | 1.2252 | | 1.23 | 2.0 | 30236 | 1.1694 | | 1.2021 | 3.0 | 45354 | 1.1520 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
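The listed hyperparameters map onto a `TrainingArguments` object roughly as follows; the output directory and anything not listed above are illustrative, and the total batch size of 64 comes from 8 per device across 8 GPUs.

```python
from transformers import TrainingArguments

# Sketch of the training configuration described above.
training_args = TrainingArguments(
    output_dir="gpt2-finetune-github-exact-1M-data",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,              # "Native AMP" mixed precision
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```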
huuuniii/gemma-medical-qa-finetune
huuuniii
2025-04-30T05:02:22Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:55:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
taobao-mnn/MiMo-7B-RL-Zero-MNN
taobao-mnn
2025-04-30T04:58:48Z
0
0
null
[ "chat", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-04-30T04:54:27Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# MiMo-7B-RL-Zero-MNN

## Introduction

This is a 4-bit quantized MNN export of MiMo-7B-RL-Zero, produced with [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).

## Download

```bash
# install the Hugging Face Hub client
pip install -U huggingface_hub
```
```bash
# shell download
huggingface-cli download taobao-mnn/MiMo-7B-RL-Zero-MNN --local-dir path/to/dir
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/MiMo-7B-RL-Zero-MNN')
```

```bash
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/MiMo-7B-RL-Zero-MNN
```

## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git

# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j

# run
./llm_demo /path/to/MiMo-7B-RL-Zero-MNN/config.json prompt.txt
```

## Documentation
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
darkc0de/XortronExperimentalCriminalComputing
darkc0de
2025-04-30T04:55:26Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:merge:TroyDoesAI/BlackSheep-24B", "base_model:darkc0de/XortronCriminalComputing", "base_model:merge:darkc0de/XortronCriminalComputing", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:42:58Z
--- base_model: - darkc0de/XortronCriminalComputing - TroyDoesAI/BlackSheep-24B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B) as a base. ### Models Merged The following models were included in the merge: * [darkc0de/XortronCriminalComputing](https://huggingface.co/darkc0de/XortronCriminalComputing) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: darkc0de/XortronCriminalComputing - model: TroyDoesAI/BlackSheep-24B merge_method: sce base_model: TroyDoesAI/BlackSheep-24B parameters: select_topk: 0.80 tokenizer: source: darkc0de/XortronCriminalComputing ```
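To reproduce a merge like this locally, the configuration above is typically handed to mergekit's `mergekit-yaml` entry point; the sketch below writes the same YAML to a file and invokes the CLI, with the output path and flags chosen for illustration.

```python
import subprocess
from pathlib import Path

# Save the merge configuration shown above to disk.
config_path = Path("sce_merge_config.yaml")
config_path.write_text("""\
models:
  - model: darkc0de/XortronCriminalComputing
  - model: TroyDoesAI/BlackSheep-24B
merge_method: sce
base_model: TroyDoesAI/BlackSheep-24B
parameters:
  select_topk: 0.80
tokenizer:
  source: darkc0de/XortronCriminalComputing
""")

# mergekit-yaml <config> <output-dir>; --cuda runs the merge on GPU.
subprocess.run(["mergekit-yaml", str(config_path), "merged-output", "--cuda"], check=True)
```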
MJAEEEEE/gemma-medical-qa-finetune
MJAEEEEE
2025-04-30T04:52:40Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:47:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PQPQPQHUST/Llama-3.2-1B-Instruct
PQPQPQHUST
2025-04-30T04:49:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T04:49:17Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** PQPQPQHUST - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
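A minimal usage sketch, assuming the repository holds standard merged causal-LM weights (the `llama`/`safetensors` tags suggest so); if it only contains a LoRA adapter, load it with PEFT instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PQPQPQHUST/Llama-3.2-1B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "PQPQPQHUST/Llama-3.2-1B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

# Build a chat-style prompt and generate a short reply.
messages = [{"role": "user", "content": "Summarize what instruction tuning does in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```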
Baysukk/whisper-large-v3-mn-ft
Baysukk
2025-04-30T04:48:47Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "region:us" ]
null
2025-04-25T05:29:08Z
--- library_name: peft license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer model-index: - name: whisper-large-v3-mn-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-mn-ft This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 6.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.4317 | 0.5903 | 500 | 0.7475 | | 1.1836 | 1.1806 | 1000 | 0.5816 | | 0.8169 | 1.7710 | 1500 | 0.5508 | | 0.5782 | 2.3613 | 2000 | 0.5468 | | 0.4928 | 2.9516 | 2500 | 0.5429 | | 0.444 | 3.5419 | 3000 | 0.5626 | | 0.2888 | 4.1322 | 3500 | 0.5678 | | 0.283 | 4.7226 | 4000 | 0.5710 | | 0.1823 | 5.3129 | 4500 | 0.5852 | | 0.1725 | 5.9032 | 5000 | 0.5834 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.21.1
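A minimal inference sketch, assuming this repository contains a PEFT adapter to be loaded on top of `openai/whisper-large-v3` as listed above; the placeholder waveform stands in for real 16 kHz mono audio.

```python
import numpy as np
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then attach this repository's adapter on top of it.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "Baysukk/whisper-large-v3-mn-ft")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Placeholder: one second of silence at 16 kHz; replace with a real waveform.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```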
walterheart/handler
walterheart
2025-04-30T04:48:47Z
0
0
null
[ "pytorch", "bark", "audio", "text-to-speech", "en", "de", "es", "fr", "hi", "it", "ja", "ko", "pl", "pt", "ru", "tr", "zh", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2025-04-30T03:28:01Z
--- language: - en - de - es - fr - hi - it - ja - ko - pl - pt - ru - tr - zh thumbnail: >- https://user-images.githubusercontent.com/5068315/230698495-cbb1ced9-c911-4c9a-941d-a1a4a1286ac6.png library: bark license: mit tags: - bark - audio - text-to-speech pipeline_tag: text-to-speech inference: true --- # Bark Bark is a transformer-based text-to-audio model created by [Suno](https://www.suno.ai). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference. The original github repo and model card can be found [here](https://github.com/suno-ai/bark). This model is meant for research purposes only. The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk. Two checkpoints are released: - [small](https://huggingface.co/suno/bark-small) - [**large** (this checkpoint)](https://huggingface.co/suno/bark) ## Example Try out Bark yourself! * Bark Colab: <a target="_blank" href="https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/suno/bark"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy: ``` pip install --upgrade pip pip install --upgrade transformers scipy ``` 2. Run inference via the `Text-to-Speech` (TTS) pipeline. You can infer the bark model via the TTS pipeline in just a few lines of code! ```python from transformers import pipeline import scipy synthesiser = pipeline("text-to-speech", "suno/bark") speech = synthesiser("Hello, my dog is cooler than you!", forward_params={"do_sample": True}) scipy.io.wavfile.write("bark_out.wav", rate=speech["sampling_rate"], data=speech["audio"]) ``` 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 24 kHz speech waveform for more fine-grained control. ```python from transformers import AutoProcessor, AutoModel processor = AutoProcessor.from_pretrained("suno/bark") model = AutoModel.from_pretrained("suno/bark") inputs = processor( text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."], return_tensors="pt", ) speech_values = model.generate(**inputs, do_sample=True) ``` 4. Listen to the speech samples either in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.generation_config.sample_rate Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. 
`scipy`:

```python
import scipy

sampling_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze())
```

For more details on using the Bark model for inference using the 🤗 Transformers library, refer to the [Bark docs](https://huggingface.co/docs/transformers/model_doc/bark).

## Suno Usage

You can also run Bark locally through the original [Bark library](https://github.com/suno-ai/bark):

1. First install the [`bark` library](https://github.com/suno-ai/bark)

2. Run the following Python code:

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
"""
speech_array = generate_audio(text_prompt)

# play audio in notebook
Audio(speech_array, rate=SAMPLE_RATE)
```

[pizza.webm](https://user-images.githubusercontent.com/5068315/230490503-417e688d-5115-4eee-9550-b46a2b465ee3.webm)

To save `speech_array` as a WAV file:

```python
from scipy.io.wavfile import write as write_wav

write_wav("/path/to/audio.wav", SAMPLE_RATE, speech_array)
```

## Model Details

The following is additional information about the models released here.

Bark is a series of three transformer models that turn text into audio.

### Text to semantic tokens

- Input: text, tokenized with [BERT tokenizer from Hugging Face](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer)
- Output: semantic tokens that encode the audio to be generated

### Semantic to coarse tokens

- Input: semantic tokens
- Output: tokens from the first two codebooks of the [EnCodec Codec](https://github.com/facebookresearch/encodec) from Facebook

### Coarse to fine tokens

- Input: the first two codebooks from EnCodec
- Output: 8 codebooks from EnCodec

### Architecture

| Model                     | Parameters | Attention  | Output Vocab size |
|:-------------------------:|:----------:|------------|:-----------------:|
| Text to semantic tokens   | 80/300 M   | Causal     | 10,000            |
| Semantic to coarse tokens | 80/300 M   | Causal     | 2x 1,024          |
| Coarse to fine tokens     | 80/300 M   | Non-causal | 6x 1,024          |

### Release date

April 2023

## Broader Implications

We anticipate that this model's text to audio capabilities can be used to improve accessibility tools in a variety of languages.

While we hope that this release will enable users to express their creativity and build applications that are a force for good, we acknowledge that any text to audio model has the potential for dual use. While it is not straightforward to voice clone known people with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark, we also release a simple classifier to detect Bark-generated audio with high accuracy (see notebooks section of the main repository).
JReal2/gemma-medical-qa-finetune
JReal2
2025-04-30T04:48:24Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:41:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
infogep/9e7452ab-a578-47cb-91eb-21c46ce29727
infogep
2025-04-30T04:48:12Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B-Instruct", "base_model:adapter:unsloth/Qwen2-7B-Instruct", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-30T04:28:58Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 9e7452ab-a578-47cb-91eb-21c46ce29727 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2-7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 867db9eee814c64e_train_data.json ds_type: json format: custom path: /workspace/input_data/867db9eee814c64e_train_data.json type: field_instruction: problem field_output: solution format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: infogep/9e7452ab-a578-47cb-91eb-21c46ce29727 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/867db9eee814c64e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ab7a8a3e-97be-4132-b5ba-3fcbabe3e90d wandb_project: s56-30 wandb_run: your_name wandb_runid: ab7a8a3e-97be-4132-b5ba-3fcbabe3e90d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9e7452ab-a578-47cb-91eb-21c46ce29727 This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5674 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5256 | 0.0191 | 200 | 0.5674 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
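A minimal inference sketch, assuming this repository holds a LoRA adapter for `unsloth/Qwen2-7B-Instruct` as configured above; the prompt mirrors the bare-instruction format (`'{instruction}'`) used during training.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "infogep/9e7452ab-a578-47cb-91eb-21c46ce29727")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")

# Training used the raw problem text as the prompt, so no chat template is applied here.
prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```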
vijay-ravichander/Non-Distill-20k
vijay-ravichander
2025-04-30T04:45:34Z
0
0
transformers
[ "transformers", "safetensors", "idefics3", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T04:40:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qhung91005/finetuned-mbart50-en-vi
qhung91005
2025-04-30T04:42:58Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-30T04:40:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pkbhaiyasupersmart/ppo-LunarLander-v2
pkbhaiyasupersmart
2025-04-30T04:41:57Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-30T04:41:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.59 +/- 17.17 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
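A minimal loading sketch for the TODO above; the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention, and the rollout loop is illustrative.

```python
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repository (filename assumed).
checkpoint = load_from_hub(
    repo_id="pkbhaiyasupersmart/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Run one evaluation episode with the trained agent.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```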
Raajkush26/IronMan
Raajkush26
2025-04-30T04:40:16Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T04:40:16Z
--- license: apache-2.0 ---
JunSotohigashi/faithful-fog-136-merged
JunSotohigashi
2025-04-30T04:39:12Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T02:29:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
twodigit/exchange_usd_krw_dow
twodigit
2025-04-30T04:38:29Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-30T04:07:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
memevis/win26
memevis
2025-04-30T04:33:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:31:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gabrielbosse9/Umbr0x-7B-V3.1-6
gabrielbosse9
2025-04-30T04:33:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T04:33:18Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** gabrielbosse9 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
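The card above gives no usage snippet. Below is a minimal inference sketch using Unsloth's loader; treating the repo as directly loadable this way (either as a merged checkpoint or as LoRA adapters over the listed base model) and the plain-text prompt format are assumptions of mine, not statements from the card.

```python
# Minimal sketch, assuming gabrielbosse9/Umbr0x-7B-V3.1-6 loads with Unsloth's
# FastLanguageModel (merged weights or LoRA adapters over the stated base model).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "gabrielbosse9/Umbr0x-7B-V3.1-6",  # this repo
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# The prompt format used during fine-tuning is not documented; a plain prompt is assumed here.
prompt = "Briefly describe what this model was fine-tuned to do."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```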
fRee-Shah-Sapna-Kumari-Viral-Video/w.A.T.C.H.Sapna.Shah.Viral.Video.Link.Shah.Sapna.Kumari.Viral.Video
fRee-Shah-Sapna-Kumari-Viral-Video
2025-04-30T04:33:31Z
0
0
null
[ "region:us" ]
null
2025-04-30T04:32:59Z
robiulawaldev/4ee4dc15-b269-4f98-9367-d1d9e78037a1
robiulawaldev
2025-04-30T04:28:50Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M-Instruct", "base_model:adapter:unsloth/SmolLM2-360M-Instruct", "region:us" ]
null
2025-04-30T04:28:40Z
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/SmolLM2-360M-Instruct model-index: - name: robiulawaldev/4ee4dc15-b269-4f98-9367-d1d9e78037a1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robiulawaldev/4ee4dc15-b269-4f98-9367-d1d9e78037a1 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
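Since the card above records only the adapter's base model, loss, and framework versions, here is a minimal sketch of loading the LoRA adapter on top of its listed base model with PEFT; the prompt is an arbitrary placeholder and the dtype/device settings are assumptions.

```python
# Minimal sketch: attach the trained LoRA adapter to its listed base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM2-360M-Instruct"
adapter_id = "robiulawaldev/4ee4dc15-b269-4f98-9367-d1d9e78037a1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # load the adapter weights from this repo

prompt = "Write a one-sentence summary of what LoRA adapters are."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```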
LatentWanderer/Qwen_Qwen3-32B-6.5bpw-h8-exl3
LatentWanderer
2025-04-30T04:25:26Z
0
1
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2309.00071", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "exl3", "region:us" ]
text-generation
2025-04-30T00:03:12Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE pipeline_tag: text-generation --- # Qwen3-32B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Uniquely support of seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within single model**, ensuring optimal performance across various scenarios. - **Significantly enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and unthinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-32B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 32.8B - Number of Paramaters (Non-Embedding): 31.2B - Number of Layers: 64 - Number of Attention Heads (GQA): 64 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-32B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. 
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` or to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3. ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. 
Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-32B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-32B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. 
**DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
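As a compact illustration of the Best Practices sampling recommendations applied to the Quickstart flow, a self-contained sketch is shown below; it is not part of the official card, and the example prompt is arbitrary.

```python
# Sketch: thinking-mode generation with the recommended sampling settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # generous output budget, as recommended for most queries
    do_sample=True,        # greedy decoding is explicitly discouraged
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(generated_ids[0][model_inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```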
ellietang/hf_saved_merged_ls-model-14B-full-CPT-v0.0.5-try2
ellietang
2025-04-30T04:24:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-Coder-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T04:18:38Z
--- base_model: unsloth/Qwen2.5-Coder-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ellietang - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
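The card above stops at the Unsloth credit. A minimal text-generation sketch with plain `transformers` follows; treating the repo as a merged, directly loadable checkpoint and assuming its tokenizer keeps the base model's chat template are my assumptions based on the repo name and tags.

```python
# Sketch: load the merged checkpoint with transformers and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ellietang/hf_saved_merged_ls-model-14B-full-CPT-v0.0.5-try2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```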
saidabizi/SmolLM2-FT-Notes
saidabizi
2025-04-30T04:21:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T01:45:52Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-Notes tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-Notes This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="saidabizi/SmolLM2-FT-Notes", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bizisaida04-fisher-college/huggingface/runs/sf1ckjjy) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
masato-ka/act_so100_cls_block_color
masato-ka
2025-04-30T04:19:55Z
0
0
null
[ "safetensors", "robotics", "act-policy", "lerobot", "dataset:masato-ka/so100_grasp_lego", "license:apache-2.0", "region:us" ]
robotics
2025-04-30T04:18:33Z
--- license: apache-2.0 datasets: - masato-ka/so100_grasp_lego tags: - robotics - act-policy - lerobot pipeline_tag: robotics --- # Model Card for act_so100_cls_block_color Action Chunking Transformer Policy trained to classify block color (blue or green). When a blue block is found in the circle, the SO-ARM100 picks it up and places it on the right side; when a green block is found, it places it on the left side. ![demo](cls_block_color.gif) ## How to Get Started with the Model See the [Lerobot library](https://github.com/huggingface/lerobot). We strongly recommend reproducing the environment shown in the video. I used the built-in camera of a MacBook Air M2, and inference was also run on a MacBook Air M2 with 16 GB of RAM. ## Training Details Trained with [LeRobot@674e784](https://github.com/huggingface/lerobot/tree/674e784aa9b16b3c14a472f4b33e00ccc53ea434). The model was trained using [LeRobot's training script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/train.py) and with the [masato-ka/so100_grasp_lego](https://huggingface.co/datasets/masato-ka/masato-ka/so100_lego_sort) dataset, using this command: ```bash python lerobot/scripts/train.py \ --dataset.repo_id='masato-ka/so100_lego_sort' \ --policy.type=act \ --output_dir=outputs/train/act_so100_lego_sort \ --job_name=act_so100_lego_sort \ --policy.device=cuda \ --wandb.enable=true ``` The training curves may be found at https://wandb.ai/masato-ka-personal/lerobot/runs/q9wlvme3. The current model corresponds to the checkpoint at 100k steps. Training took about 1h35m on an Nvidia A100.
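As a rough illustration of how such a checkpoint can be loaded for offline inference with LeRobot's Python API, a minimal sketch is given below; the import path follows LeRobot's repository layout, while the camera key, image size, and state dimension are assumptions about a typical SO-100 setup and are not taken from this card.

```python
# Minimal sketch, assuming LeRobot's ACTPolicy and a typical SO-100 observation layout.
# "observation.images.laptop", the 480x640 image size, and the 6-dim state are assumptions.
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("masato-ka/act_so100_cls_block_color")
policy.eval()
policy.reset()  # clear the action-chunk queue before a new episode

# One dummy observation: a normalized RGB frame plus the follower arm's joint state.
batch = {
    "observation.images.laptop": torch.rand(1, 3, 480, 640),  # values in [0, 1]
    "observation.state": torch.rand(1, 6),                     # joint positions
}
with torch.no_grad():
    action = policy.select_action(batch)  # next action from the predicted chunk
print(action.shape)
```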
ellietang/hf_saved_lora_ls-model-14B-full-CPT-v0.0.5-try2
ellietang
2025-04-30T04:17:26Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-30T04:16:42Z
--- base_model: unsloth/Qwen2.5-Coder-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ellietang - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
10-Shah-Sapna-Kumari-new-Viral-Video/NEW.Sapna.Shah.Viral.Video.Original.Link
10-Shah-Sapna-Kumari-new-Viral-Video
2025-04-30T04:16:13Z
0
0
null
[ "region:us" ]
null
2025-04-30T04:16:00Z
adsdfbbn/1123
adsdfbbn
2025-04-30T04:13:27Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-30T04:13:27Z
--- license: apache-2.0 ---
OPEA/QwQ-32B-Preview-int4-sym-mixed-inc
OPEA
2025-04-30T04:11:05Z
15
8
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/QwQ-32B-Preview", "base_model:quantized:Qwen/QwQ-32B-Preview", "license:apache-2.0", "4-bit", "auto-round", "region:us" ]
null
2024-11-29T10:07:35Z
--- license: apache-2.0 datasets: - NeelNanda/pile-10k base_model: - Qwen/QwQ-32B-Preview --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) generated by [intel/auto-round](https://github.com/intel/auto-round). We excluded 3 layers from quantization due to the overflow issue on some int4 backends. You could find AutoAWQ format [here](https://huggingface.co/OPEA/QwQ-32B-Preview-int4-sym-mixed-awq-inc),which is a little different from this one. ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from auto_round import AutoRoundConfig ##must import for auto-round format from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "OPEA/QwQ-32B-Preview-int4-sym-mixed-inc" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "How many r in strawberry." messages = [ {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512, do_sample=False ##change this to follow official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) prompt = "9.11和9.8哪个数字大" #INT4: """9.11和9.8,哪个数字大呢?我得好好想想。首先,这两个数字都是小数,也就是带小数点的数。9.11看起来像是9又11分之一,而9.8是9又8/10。不过,我得确认一下,因为在不同的上下文中,小数的表示可能有所不同。 首先,我需要明确这两个数字的表示方式。在常见的十进制系统中,小数点左边的部分是整数部分,右边是小数部分。所以,9.11应该是9加上0.11,而9.8是9加上0.8。 如果这是标准的十进制小数,那么比较它们的大小就比较简单。显然,0.8大于0.11,所以9.8应该大于9.11。但是,我得再确认一下,因为有时候小数的表示可能有不同含义,比如在某些上下文中,小数点后面的部分可能代表不同的东西。 另外,我注意到9.11可能被误解为9月11日,也就是一个日期,而9.8可能被理解为9.8,一个单纯的小数。但如果按照日期来理解,9.11是9月11日,而9.8如果是9月8日,那么显然9月11日晚于9月8日。但是,题目中给出的是9.11和9.8,没有明确指出是日期还是小数,所以我假设它们都是小数。 为了确保,我再检查一下。在数学中,小数点表示法是国际通用的,小数点后面的部分表示分数部分。所以,9.11=9+0.11,而9.8=9+0.8。现在,比较0.11和0.8,显然0.8大于0.11,因此9.8大于9.11。 但是,也许从另一个角度考虑。有时候,小数可能表示百分比。比如,9.11可能表示9.11%,而9.8表示9.8%。如果是这样,那么9.8%大于9.11%。或者,如果它们表示的是版本号,比如软件的版本,那么比较方法可能不同。例如,在有些版本号系统中,9.11可能大于9.8,因为11大于8。""" ##BF16 """9.11和9.8,哪个数字大呢?我得想一想。首先,这两个数字都是小数,而且都以9开头。9.11是九点一一,9.8是九点八。我觉得9.8可能更大,因为八比一一要大。但是,我有点不确定,因为11是两位数,可能表示更大。 让我来仔细比较一下。在小数比较中,先看整数部分,它们的整数部分都是9,所以一样。那就要看小数部分,首先是十分位。9.11的十分位是1,9.8的十分位是8。8比1大,所以9.8应该更大。 不过,我再想想,也许有人会认为11比8大,因为11是两位数。但其实,在小数比较中,位数不是决定因素,而是每位上的数字大小。所以,尽管9.11的小数部分是两位,但它的十分位是1,而9.8的十分位是8,所以9.8更大。 为了更确定,我可以把它们转换成分数或者 decimal 形式来比较。比如说,9.11等于9又11/100,而9.8等于9又80/100。很明显,80/100大于11/100,所以9.8更大。 或者,我可以把它们都转换成百分数。9.11是911%,9.8是980%。980%大于911%,所以9.8更大。 另外,如果我想象一下在数轴上,9.11是在9和10之间的某个位置,而9.8是在更靠近10的地方。所以,9.8当然大于9.11。 再者,我可以减去9,看看小数部分。9.11减去9是0.11,9.8减去9是0.8。0.8大于0.11,所以9.8更大。 我还可以考虑它们的差值。9.8减去9.11等于0.69,这是一个正数,说明9.8大于9.11。 或者,我可以把它们都乘以100,变成整数。9.11乘以100是911,""" prompt = "How many r in strawberry." ##INT4: """Let's see. I have this question: "How many r's in strawberry?" Okay, first things first, I need to figure out what exactly is being asked here. It seems straightforward—counting the number of times the letter 'r' appears in the word "strawberry." But let's make sure. So, the word is "strawberry." I should probably write it down and look at each letter to count the 'r's. 
Let's spell it out: s-t-r-a-w-b-e-r-r-y. Okay, that's how it's spelled. Now, to count the 'r's, I need to go through each letter and see how many times 'r' appears. Let's do that: First letter: s, not r. Second letter: t, not r. Third letter: r, okay, that's one r. Fourth letter: a, not r. Fifth letter: w, not r. Sixth letter: b, not r. Seventh letter: e, not r. Eighth letter: r, that's another r. Ninth letter: r, that's yet another r. Tenth letter: r, wait, no, that's y, not r. Wait, let me check again. s-t-r-a-w-b-e-r-r-y. So, positions 3,8,9 are r's, and the last letter is y, not r. So, there are three r's in strawberry. But, just to be thorough, let's make sure. s-t-r-a-w-b-e-r-r-y. Letters 3,8,9 are r's, and the rest are not. So, definitely three r's. But, the question is "How many r's in strawberry?" and I think it's a simple letter counting, but maybe there's more to it. Perhaps it's a trick question or something. Wait, in some contexts, 'r' can stand for something else, like in measurements or other abbreviations, but in this case, it's clearly about the letter in the word. Alternatively, maybe it's about the phonetic sounds, like how many times the 'r' sound appears in the pronunciation of strawberry. But the question specifies 'r's in strawberry,' which probably means the letter count. In British English, strawberry is spelled the same way, but in some words, there are differences, like color vs. colour, but in this case, it's the same. Also, worth""" ##BF16: """Let's see. The word is "strawberry." I need to find out how many times the letter "r" appears in it. Okay, so I'll look at each letter in the word one by one. First letter: s - that's not r. Second letter: t - no, not r. Third letter: r - okay, that's one r. Fourth letter: a - not r. Fifth letter: w - not r. Sixth letter: b - no. Seventh letter: e - not r. Eighth letter: r - another r, so that's two rs. Ninth letter: r - wait, is there a ninth letter? Let me check. S-t-r-a-w-b-e-r-r-y. Yes, there are two rs, but I think there might be more. Wait, let's count again. S-t-r-a-w-b-e-r-r-y. That's 10 letters. So, positions: 1: s 2: t 3: r 4: a 5: w 6: b 7: e 8: r 9: r 10: y So, positions 3, 8, and 9 are rs. That means there are three rs in "strawberry." But earlier I thought there were only two. Maybe I missed one. Let's double-check. S-t-r-a-w-b-e-r-r-y. r is the third letter, then the eighth, and the ninth. So, three rs. Wait, but sometimes people might pronounce it differently, but in the spelling, it's three rs. I think the answer is three. 
**Final Answer** \[ \boxed{3} \] """ ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/QwQ-32B-Preview-int4-sym-mixed-inc" --eval --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid ``` | Metric | BF16 | INT4 | iter1000 nsamples 512 | | :--------------------------------------- | :----------------------: | :----------------------: | ------------------------ | | Avg | 0.6846 | 0.6857 | 0.6826 | | leaderboard_mmlu_pro 5 shots | 0.5773 | 0.5736 | 0.5733 | | leaderboard_ifeval inst_level_strict_acc | 0.4043=(0.4628+0.3457)/2 | 0.3919=(0.4436+0.3401)/2 | 0.4028=(0.4544+0.3512)/2 | | gsm8k 5 shots | 0.8271 | 0.8294 | 0.8423 | | cmmlu | 0.8795 | 0.8730 | 0.8736 | | ceval-valid | 0.8730 | 0.8685 | 0.8633 | | lambada_openai | 0.7565 | 0.7625 | 0.7609 | | hellaswag | 0.6646 | 0.6608 | 0.6596 | | winogrande | 0.7443 | 0.7577 | 0.7498 | | piqa | 0.8128 | 0.8172 | 0.8112 | | truthfulqa_mc1 | 0.4162 | 0.4211 | 0.4100 | | openbookqa | 0.3440 | 0.3560 | 0.3360 | | boolq | 0.9003 | 0.8988 | 0.8972 | | arc_easy | 0.8279 | 0.8300 | 0.8224 | | arc_challenge | 0.5572 | 0.5597 | 0.5538 | ### Generate the model Here is the sample command to generate the model. For symmetric quantization, we found overflow/NAN will occur for some backends, so better fallback some layers. auto_round requires version >=0.4.1 ```bash auto-round \ --model Qwen/QwQ-32B-Preview \ --device 0 \ --group_size 128 \ --bits 4 \ --disable_eval \ --model_dtype "fp16" \ --fp_layers "model.layers.5.mlp.down_proj,model.layers.5.mlp.up_proj,model.layers.5.mlp.gate_proj" \ --format 'auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/OLMo-2-1124-7B-Instruct-int4-sym-inc
OPEA
2025-04-30T04:10:06Z
23
0
null
[ "safetensors", "olmo2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:allenai/OLMo-2-1124-7B-Instruct", "base_model:quantized:allenai/OLMo-2-1124-7B-Instruct", "license:apache-2.0", "4-bit", "auto-round", "region:us" ]
null
2024-12-06T02:54:05Z
--- license: apache-2.0 datasets: - NeelNanda/pile-10k base_model: - allenai/OLMo-2-1124-7B-Instruct --- ## Model Card Details This model is an int4 model with group_size 128 and symmetric quantization of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `1cdca16` to use AutoGPTQ format ## Inference on CPU/HPU/CUDA pip3 install transformers>=4.47 HPU: docker image with Gaudi Software Stack is recommended, please refer to following script for environment setup. More details can be found in [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built). ```python from auto_round import AutoHfQuantizer ##must import for auto-round format import torch from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/OLMo-2-1124-7B-Instruct-int4-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ##revision="1cdca16", ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI."}, {"role": "user", "content": prompt} ] tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ##prompt = "There is a girl who likes adventure," ##INT4 """There is a girl who likes adventure, She's always on the lookout for a new escapade, Her heart beats with excitement at the thought of the unknown, Her spirit yearns for the thrill of exploration, She packs her backpack with essentials, A map, a compass, and a flashlight, Her boots are ready for the rugged terrain, Her spirit is as boundless as the sky. She embarks on journeys through forests deep and wide, Climbs mountains with a heart full of pride, She paddles her kayak through turbulent waters, And hikes through valleys where the wildflowers bloom. The girl with the adventurous soul seeks out the hidden gems, The secret trails, the ancient ruins, She listens to the whispers of the wind, And follows the call of the distant drum. Her adventures are not just about the destination, But the experiences she gathers along the way, The stories """ ##BF16 """There is a girl who likes adventure, She dreams of far-off lands and distant shores, Of climbing mountains high and exploring caves, Her heart beats fast with excitement at the thought Of the unknown paths that lie beyond the maps. She packs her backpack with essentials and more, A compass, a flashlight, and a book or two, Her spirit eager, her eyes wide with wonder, As she sets out on her journey anew. 
The girl with the adventurous soul embarks On quests that challenge her mind and her might, She learns to navigate by the stars above, And finds joy in the beauty of the night. Through forests deep and rivers wide she roams, Each step a story, each experience a treasure, Her courage grows with every challenge faced, And she discovers the strength she never knew she had. The girl who likes adventure, with each passing day, Grows wiser""" ##prompt = "Which one is larger, 9.11 or 9.8" ## INT4 """9.8 is larger than 9.11. """ ## BF16 """9.8 is larger than 9.11. To compare these two numbers, you can simply look at their decimal places. Since 9.8 has a higher decimal value (0.8) compared to 9.11 (which has a decimal value of 0.11), 9.8 is the larger number. """ prompt = "How many r in strawberry." ## INT4 """There are two 'r's in "strawberry." """ ## BF16 """There are 2 'r's in "strawberry.""" ##prompt = "Once upon a time," ##INT4 """Once upon a time, in a world where technology and imagination intertwined, there existed an AI named OLMo 2. Created by the brilliant minds at the Allen Institute for AI, OLMo 2 was more than just lines of code; it was a beacon of knowledge and a guardian of information. OLMo 2's design was sleek and modern, with a digital interface that shimmered like a starlit sky. Its voice was soothing, a harmonious blend of tones that could calm the most restless of souls. With a vast database at its disposal, OLMo 2 was capable of answering any question, no matter how obscure or complex. Every day, people from all walks of life would seek the wisdom of OLMo 2. Students would ask about the intricacies of quantum physics, while artists would inquire about the history of their favorite art movements. Parents would consult OLMo 2 for advice on raising children, and travelers would ask for """ ##BF16 """Once upon a time, in a world where imagination knew no bounds, there existed a land filled with wonder and mystery. This land was called Lumina, a place where the sky shimmered with the colors of a thousand sunsets, and the forests whispered ancient secrets to those who dared to listen. In Lumina, there lived a young girl named Elara. She had hair as golden as the sun and eyes that held the depth of the ocean. Elara possessed a heart full of curiosity and a spirit unyielding in the face of adventure. Her home was a quaint cottage nestled at the edge of the Whispering Woods, a place where the trees seemed to dance in the wind, sharing tales of long-forgotten times. One day, as the first light of dawn painted the sky in hues of pink and orange, Elara received a mysterious letter. The envelope was sealed with wax that bore the crest of the forgotten kingdom of Aetheria. 
Intrigued """ ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --eval --model "OPEA/OLMo-2-1124-7B-Instruct-int4-sym-inc" --eval_bs 16 --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k ``` | Metric | BF16 | INT4 | | --------------------------- | ------------------------ | ------------------------ | | avg | 0.6284 | 0.6316 | | leaderboard_mmlu_pro 5shot | 0.2975 | 0.2931 | | leaderboard_ifeval | 0.5815=(0.6379+0.5250)/2 | 0.6073=(0.6619+0.5527)/2 | | lambada_openai | 0.6967 | 0.6959 | | hellaswag | 0.6585 | 0.6537 | | winogrande | 0.7174 | 0.7206 | | piqa | 0.8047 | 0.8118 | | truthfulqa_mc1 | 0.3758 | 0.3807 | | openbookqa | 0.4020 | 0.4060 | | boolq | 0.8450 | 0.8535 | | arc_easy | 0.8384 | 0.8321 | | arc_challenge | 0.5648 | 0.5742 | | gsm8k(5shot) strict match | 0.7582 | 0.7498 | ## Reproduce the model Here is the sample command to generate the model. ```bash auto-round \ --model allenai/OLMo-2-1124-7B-Instruct \ --device 0 \ --nsamples 512 \ --model_dtype "fp16" \ --iter 1000 \ --disable_eval \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Marco-o1-int4-sym-inc
OPEA
2025-04-30T04:09:43Z
3
0
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:AIDC-AI/Marco-o1", "base_model:quantized:AIDC-AI/Marco-o1", "license:apache-2.0", "4-bit", "auto-round", "region:us" ]
null
2024-12-04T02:25:40Z
--- license: apache-2.0 datasets: - NeelNanda/pile-10k base_model: - AIDC-AI/Marco-o1 --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `fa948bc` to use AutoGPTQ format ### INT4 Inference(CPU/HPU/CUDA) ```python from auto_round import AutoRoundConfig ##must import for auto-round format import torch from typing import List, Dict, Tuple from transformers import AutoModelForCausalLM, AutoTokenizer def load_model_and_tokenizer(path): tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", ##change device map trust_remote_code=True, revision="fa948bc" ## AutoGPTQ format ) model.eval() return tokenizer, model def generate_response(model, tokenizer, input_ids, attention_mask, max_new_tokens=4096): generated_ids = input_ids with torch.inference_mode(): for _ in range(max_new_tokens): outputs = model(input_ids=generated_ids, attention_mask=attention_mask) next_token_id = torch.argmax(outputs.logits[:, -1, :], dim=-1).unsqueeze(-1) generated_ids = torch.cat([generated_ids, next_token_id], dim=-1) attention_mask = torch.cat([attention_mask, torch.ones_like(next_token_id)], dim=-1) new_token = tokenizer.decode(next_token_id.squeeze(), skip_special_tokens=True) print(new_token, end='', flush=True) if next_token_id.item() == tokenizer.eos_token_id: break return tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True) def chat(model, tokenizer): history: List[Dict[str, str]] = [] print("Enter 'q' to quit, 'c' to clear chat history.") while True: user_input = input("User: ").strip().lower() if user_input == 'q': print("Exiting chat.") break if user_input == 'c': print("Clearing chat history.") history.clear() continue if not user_input: print("Input cannot be empty.") continue history.append({"role": "user", "content": user_input}) text = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True) model_inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096).to('cuda:0') print('Assistant:', end=' ', flush=True) response = generate_response(model, tokenizer, model_inputs.input_ids, model_inputs.attention_mask) print() history.append({"role": "assistant", "content": response}) def main(): path = "OPEA/Marco-o1-int4-sym-inc" tokenizer, model = load_model_and_tokenizer(path) print('Starting chat.') chat(model, tokenizer) main() """9.11和9.8哪个数字大""" ## INT4 """ <Thought> Alright, I need to determine which number is larger between 9.11 and 9.8. Let's break this down step by step to ensure accuracy. First, I'll consider the structure of these numbers. Both numbers are decimal numbers, which means they have a whole number part and a fractional part separated by a decimal point. Specifically, 9.11 has a whole number part of 9 and a fractional part of 0.11. Similarly, 9.8 has a whole number part of 9 and a fractional part of 0.8. To compare these two numbers, I'll start by comparing their whole number parts. Both numbers have the same whole number part, which is 9. Since the whole number parts are equal, the comparison will focus on the fractional parts. Next, I'll look at the fractional parts. 9.11 has a fractional part of 0.11, while 9.8 has a fractional part of 0.8. To compare these, I'll convert them to fractions to make the comparison easier. 
0.11 is equivalent to 11/100, and 0.8 is equivalent to 8/10. To compare these fractions, I'll find a common denominator. The least common denominator for 100 and 10 is 100. Converting 8/10 to a denominator of 100: 8/10 = (8 * 10) / (10 * 10) = 80/100 Now, comparing 11/100 and 80/100: 11/100 < 80/100 This means that 0.11 < 0.8. Since the fractional part of 9.11 is smaller than the fractional part of 9.8, and the whole number parts are equal, it follows that 9.11 is less than 9.8. But wait, let me double-check to ensure I haven't made any calculation errors. Another way to compare these numbers is to subtract the smaller number from the larger one to see if the result is positive or negative. 9.8 - 9.11 = 0.69 Since the result is positive, this confirms that 9.8 is indeed larger than 9.11. Additionally, I can visualize these numbers on a number line to further confirm. On a number line, 9.11 would be slightly to the right of 9, and 9.8 would be further to the right of 9.8. This spatial representation aligns with the numerical comparison I performed earlier. Moreover, considering the context, if these numbers represent measurements or values in a real-world scenario, 9.8 would be the larger value, indicating a greater quantity or magnitude compared to 9.11. In summary, after multiple methods of comparison—direct comparison of fractional parts, subtraction, and visualization on a number line—it's clear that 9.8 is larger than 9.11. </Thought> <Output> 9.8 is larger than 9.11. </Output>""" ##BF16 """Alright, I need to determine which number is larger between 9.11 and 9.8. Let's break this down step by step to ensure accuracy. First, I'll consider the structure of these numbers. Both are decimal numbers, which means they have a whole number part and a fractional part separated by a decimal point. Specifically, 9.11 has a whole number part of 9 and a fractional part of 0.11. Similarly, 9.8 has a whole number part of 9 and a fractional part of 0.8. Since both numbers have the same whole number part (9), the comparison will focus on their fractional parts. The fractional part of 9.11 is 0.11, and the fractional part of 9.8 is 0.8. To compare 0.11 and 0.8, I'll convert them to fractions to make the comparison easier. 0.11 is equivalent to 11/100, and 0.8 is equivalent to 80/100. Now, comparing 11/100 and 80/100, it's clear that 80/100 is larger than 11/100 because 80 is greater than 11. Therefore, 0.8 is greater than 0.11. Since both numbers have the same whole number part and the fractional part of 9.8 is larger than that of 9.11, it follows that 9.8 is greater than 9.11. But wait, let me double-check to ensure I haven't made any calculation errors. Calculating 9.11 and 9.8: - 9.11 + 0.89 = 10.00 - 9.8 + 0.21 = 10.01 Comparing these sums, 10.01 is greater than 10.00, which further confirms that 9.8 is larger than 9.11. Additionally, considering the place values: - The tenths place in 9.11 is 1, and in 9.8 it's 8. - The hundredths place in 9.11 is 1, and in 9.8 it's 0. Since 8 is greater than 1 in the tenths place, this also indicates that 9.8 is larger. Another way to look at it is by converting both numbers to percentages: - 9.11 is 911% - 9.8 is 980% Clearly, 980% is greater than 911%, reinforcing that 9.8 is larger. Lastly, thinking about real-world applications, if these numbers represent quantities like lengths or weights, 9.8 would be a larger measurement compared to 9.11. In conclusion, after multiple verification methods, it's evident that 9.8 is indeed larger than 9.11. 
</Thought> <Output> 9.8 is larger than 9.11.""" User: strawberry中有几个r ##INT4 """ strawberry中有几个r Assistant: <Thought> Alright, I need to determine how many 'r's are in the word "strawberry". Let's start by looking at the word character by character. First, I'll list out the letters in "strawberry": - s - t - r - a - w - b - e - r - r - y Now, I'll count each 'r' individually: 1. The first 'r' is the third letter. 2. The second 'r' is the eighth letter. 3. The third 'r' is the ninth letter. So, there are three 'r's in "strawberry". But wait, let me double-check to ensure I haven't missed any. Sometimes, words can have repeated letters that might be easy to overlook. Scanning through again: - s (not 'r') - t (not 'r') - r (first 'r') - a (not 'r') - w (not 'r') - b (not 'r') - e (not 'r') - r (second 'r') - r (third 'r') - y (not 'r') Yes, that confirms there are three 'r's. Alternatively, I could think about the pronunciation of "strawberry" to see if the 'r's are pronounced distinctly. The word has a clear 'r' sound at the beginning and two more 'r' sounds towards the end. This further confirms the count. Another way to verify is by using a simple counting method: - First 'r': position 3 - Second 'r': position 8 - Third 'r': position 9 No other 'r's are present in the word. Therefore, after multiple checks and different methods, I'm confident that there are three 'r's in "strawberry". </Thought> <Output> strawberry has three r's """ ##BF16 """Assistant: <Thought> Alright, I need to determine how many times the letter "r" appears in the word "strawberry". Let's break this down step by step to ensure accuracy. First, I'll list out all the letters in "strawberry": s, t, r, a, w, b, e, r, r, y Now, I'll identify which of these letters are "r": - The first "r" is the third letter. - The second "r" is the eighth letter. So, there are two instances of the letter "r" in "strawberry". But wait, let me double-check to make sure I haven't missed any. Sometimes, words can have repeated letters, and it's easy to overlook one. Looking again: s, t, r, a, w, b, e, r, r, y Yes, there are indeed two "r"s: one after the "t" and another after the "e". Alternatively, I can count them by position: 1. s 2. t 3. r 4. a 5. w 6. b 7. e 8. r 9. r 10. y Counting the "r"s: 3rd and 8th positions. Just to be thorough, I'll consider if there's any possibility of miscounting. For example, sometimes letters can be confused with similar-looking characters, but in this case, "r" is distinct. Another way to verify is to spell the word out loud: s-t-r-a-w-b-e-r-r-y Yes, the "r"s are clearly in the third and eighth positions. Therefore, after multiple checks, I'm confident that there are two "r"s in "strawberry". </Thought> <Output> There are 2 "r"s in "strawberry".""" User: How many r in strawberry. ##INT4 """Assistant: <Thought> Alright, I need to figure out how many "r" letters are in the word "strawberry." Let's start by looking at the word itself: s-t-r-a-w-b-e-r-r-y. I'll go through each letter one by one to count the "r"s. First, the first letter is "s," which is not an "r." Next is "t," also not an "r." Then we have "r," that's the first "r." Moving on to "a," still not an "r." The next letter is "w," no "r" there. Then we have another "r," that's the second "r." After that is "b," not an "r." Then "e," not an "r." Finally, we have two "r"s in a row: "r" and "r." So, counting them up: first "r," second "r," third "r," and fourth "r." That makes a total of four "r"s in the word "strawberry." 
But wait, let me double-check to make sure I didn't miss any. Starting over: s-t-r-a-w-b-e-r-r-y. First "r" is here, second "r" here, third "r" here, and fourth "r" here. Yes, four "r"s in total. Alternatively, I could think of it as counting the number of times the letter "r" appears in the word. Each time I see an "r," I'll mark it. So, s-t-r-a-w-b-e-r-r-y. First "r," second "r," third "r," fourth "r." Yep, four times. Another way to verify is to use a simple counting method. I'll list out the letters and circle each "r": s-t-r-a-w-b-e-r-r-y. First "r" circled, second "r" circled, third "r" circled, fourth "r" circled. Confirmed, four "r"s. I could also consider the position of each "r" to ensure I'm not counting any duplicates or missing any. The first "r" is in the third position, the second in the sixth, the third in the eighth, and the fourth in the ninth. All distinct positions, so no overlaps. Additionally, thinking about the pronunciation, the "r"s are pronounced distinctly, so they are separate letters in the word. There's no blending or merging of "r"s that would make them indistinguishable. In conclusion, after multiple methods of counting and verification, the word "strawberry" contains four "r"s.""" ##BF 16 """Assistant: <Thought> Alright, I need to figure out how many "r"s are in the word "strawberry." Let's start by looking at the word itself: s-t-r-a-w-b-e-r-r-y. Okay, so I'll count each "r" as I go through the letters. First, the first letter is "s," which doesn't have an "r." Then "t" also doesn't have an "r." Next is "r" – that's the first "r." Moving on to "a" – no "r." Then "w" – no "r." "B" – no "r." "E" – no "r." Now, the first "r" again – that's the second "r." Then "e" – no "r." "R" – that's the third "r." Finally, "y" – no "r." So, counting them up: first "r," second "r," third "r." That makes three "r"s in total. But wait, let me double-check to make sure I didn't miss any. Starting over: s-t-r-a-w-b-e-r-r-y. Yes, the letters are s, t, r, a, w, b, e, r, r, y. So, r appears twice more after the first one. Definitely three "r"s. Alternatively, I could think of it as the word "strawberry" having two syllables with "r" sounds: "straw" and "berry." In "straw," there's one "r," and in "berry," there's two "r"s. Adding them up: 1 + 2 = 3 "r"s. Another way to verify is to consider the pronunciation. The word "strawberry" is pronounced as /ˈstrɔːr.bəri/, where the "r" sounds are clearly heard. Counting them: first "r" in "straw," second "r" in "berry," and a third "r" in "berry." So, three "r"s in total. I think I've covered all possible ways to count the "r"s in "strawberry," and consistently, the number comes out to three. </Thought> <Output> There are 3 r's in strawberry. 
</Output>""" ``` ### Evaluate the model pip3 install lm-eval==0.4.5 We found the accuracy drop is slightly larger for some tasks; you may try other quantization algorithms or reduce the group_size to 32. ```bash auto-round --model "OPEA/Marco-o1-int4-sym-inc" --eval --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid ``` | Metric | BF16 | INT4 | | :--------------------------- | :---------------------: | :----------------------: | | Avg | 0.6471 | 0.6401 | | leaderboard_mmlu_pro 5 shots | 0.4381 | 0.4271 | | leaderboard_ifeval | 0.457=(0.5240+0.3900)/2 | 0.4312=(0.5000+0.3623)/2 | | cmmlu | 0.7924 | 0.7767 | | ceval-valid | 0.7853 | 0.7786 | | gsm8k 5 shots | 0.7976 | 0.7763 | | lambada_openai | 0.6975 | 0.6912 | | hellaswag | 0.6061 | 0.6015 | | winogrande | 0.6946 | 0.7009 | | piqa | 0.7927 | 0.7916 | | truthfulqa_mc1 | 0.4211 | 0.4149 | | openbookqa | 0.3440 | 0.3500 | | boolq | 0.8709 | 0.8713 | | arc_easy | 0.8157 | 0.8106 | | arc_challenge | 0.5461 | 0.5401 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model AIDC-AI/Marco-o1 \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype "fp16" \ --format 'auto_awq,auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Falcon3-10B-Base-int4-sym-inc
OPEA
2025-04-30T04:08:37Z
3
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:tiiuae/Falcon3-10B-Base", "base_model:quantized:tiiuae/Falcon3-10B-Base", "4-bit", "auto-round", "region:us" ]
null
2024-12-13T05:19:14Z
--- datasets: - NeelNanda/pile-10k base_model: - tiiuae/Falcon3-10B-Base --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [tiiuae/Falcon3-10B-Base](https://huggingface.co/tiiuae/Falcon3-10B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `4579272` to use AutoGPTQ format ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from auto_round import AutoRoundConfig ##must import for auto_round format from transformers import AutoModelForCausalLM, AutoTokenizer quantized_model_dir = "OPEA/Falcon3-10B-Base-int4-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, device_map="auto" ## revision="4579272" ##AutoGPTQ format ) text = "How many r in strawberry? The answer is " inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False).to(model.device) print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0])) text = "How many r in strawberry? The answer is" ##INT4: """How many r in strawberry? The answer is 2. ### Additional Questions and Answers #### 11. **How many r in strawberry?** **Answer:** The word "strawberry" contains 2 'r's. #### """ ##BF16: """ How many r in strawberry? The ansnwer is 2. ### 10. **How many r in strawberry?** **Question:** How many times does the letter 'r' appear in the word "strawberry"? **Answer:** The letter 'r **Answer:** The answer to the riddle""" """ text = "Which number is larger, 9.8 or 9.11? The answer is" ##INT4 """Which number is larger, 9.8 or 9.11? The answer is 9.8. #### 10. **What is the smallest number in the set {1.2, 1.02, 1.22, 1.002}?** """ ##BF16: """Which number is larger, 9.8 or 9.11? The answer is 9.8. #### Question 2: **How do you compare the numbers 12.34 and 12.345?** **Answer:** To compare 12.34""" text = "Once upon a time," ##INT4: """Once upon a time, in a small town named Harmonyville, lived two best friends - Mia and Ben. They were both eight years old and loved exploring the world around them. One sunny afternoon, while playing near the park, they found a mysterious box with a note """ ##BF16: """Once upon a time, in a small town named Harmonyville, there lived two best friends - Timmy the Turtle and Sally the Squirrel. They loved exploring their beautiful forest home together, discovering new things every day. One sunny afternoon, they stumbled upon a mysterious cave filled with """ text = "There is a girl who likes adventure," ##INT4: """There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a faraway land called "The Land of the Sun." She packed her bag with everything she needed, including her favorite book about the sun. """ ##BF16: """There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a beautiful country called Italy. She wanted to see all the famous landmarks and try the delicious Italian food. 
""" ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/Falcon3-10B-Base-int4-sym-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu ``` | Metric | BF16 | INT4 | | ------------------------- | ----------------- | ----------------- | | Avg.13 | 0.6151 | 0.6092 | | Avg.10 | 0.64113 | 0.63584 | | leaderboard_mmlu_pro | 0.4238 | 0.4156 | | leaderboard_ifeval | (0.4149+0.2939)/2 | (0.4233+0.2828)/2 | | gsm8k(5shot) strict match | 0.8067 | 0.7923 | | mmlu | 0.7069 | 0.6930 | | lambada_openai | 0.6998 | 0.7025 | | hellaswag | 0.5873 | 0.5832 | | winogrande | 0.7380 | 0.7293 | | piqa | 0.7884 | 0.7889 | | truthfulqa_mc1 | 0.3427 | 0.3452 | | openbookqa | 0.3400 | 0.3320 | | boolq | 0.8232 | 0.8116 | | arc_easy | 0.8312 | 0.8258 | | arc_challenge | 0.5538 | 0.5469 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model tiiuae/Falcon3-10B-Base \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype 'float16' \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Phi-3.5-vision-instruct-int4-sym-inc
OPEA
2025-04-30T04:08:26Z
17
0
null
[ "pytorch", "phi3_v", "custom_code", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:microsoft/Phi-3.5-vision-instruct", "base_model:quantized:microsoft/Phi-3.5-vision-instruct", "license:mit", "4-bit", "auto-round", "region:us" ]
null
2024-11-29T06:06:55Z
--- license: mit datasets: - NeelNanda/pile-10k base_model: - microsoft/Phi-3.5-vision-instruct --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision="0f977aa" to use AutoGPTQ format. ## How To Use ### Requirements The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.5.8 numpy==1.24.4 Pillow==10.3.0 Requests==2.31.0 torch==2.3.0 torchvision==0.18.0 transformers==4.43.0 accelerate==0.30.0 ``` ### INT4 Inference ```python from auto_round import AutoRoundConfig ##must import for auto-round format import requests from PIL import Image from transformers import AutoModelForCausalLM, AutoTokenizer, AutoProcessor model_id = "OPEA/Phi-3.5-vision-instruct-int4-sym-inc" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", trust_remote_code=True, torch_dtype="auto", ##revision="0f977aa" ##AutoGPTQ format ) processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True, num_crops=4 ) image_url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" content = "Describe this image." messages = [ {"role": "user", "content": "<|image_1|>\n"+content}, ] prompt = processor.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs = Image.open(requests.get(image_url, stream=True).raw) inputs = processor(prompt, image_inputs, return_tensors="pt").to(model.device) generation_args = { "max_new_tokens": 1000, "temperature": 0.0, "do_sample": False, } generate_ids = model.generate(**inputs, eos_token_id=processor.tokenizer.eos_token_id, **generation_args ) # remove input tokens generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:] response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(response) ##INT4: ## The image captures a serene beach scene at sunset with a person and a dog. The person is seated on the sand, reading a book, while the dog, wearing a harness, sits attentively beside them. The sun is low on the horizon, casting a warm glow and long shadows on the sand. The ocean is calm, and the sky is clear, suggesting a peaceful end to the day. ##BF16: ## The image shows a person sitting on a sandy beach with a dog. The person is wearing a plaid shirt and is holding a book, while the dog is sitting next to them, looking at the book. The beach is near the ocean, and the sun is low in the sky, suggesting it is either sunrise or sunset. The sky is clear, and the overall atmosphere is calm and serene. image_url = "http://images.cocodataset.org/train2017/000000411975.jpg" content = "How many people are there on the baseball field in the image?" ##INT4: ## There are three people on the baseball field in the image. ##BF16: ## There are three people on the baseball field in the image. image_url = "https://intelcorp.scene7.com/is/image/intelcorp/processor-overview-framed-badge:1920-1080?wid=480&hei=270" content = "This image represents which company?" ##INT4: ## The image represents the company Intel, as indicated by the text 'intel INSIDE'. ##BF16: ## The image represents the company Intel, as indicated by the logo and the text 'INSIDE'. ``` ## Evaluation the model pip3 install git+https://github.com/open-compass/VLMEvalKit.git@7de2dcb. 
The evaluation process may encounter errors that require changing model backend or evaluation code. Detailed instructions will be provided in a future update ```bash auto-round-mllm --eval --model OPEA/Phi-3.5-vision-instruct-int4-sym-inc --tasks MMBench_DEV_EN_V11,ScienceQA_VAL,TextVQA_VAL,POPE --output_dir "./eval_result" ``` |Metric |16bits|Pile Calib INT4 | Llava Calib INT4 | |-------------------|:------|:------|:------| |avg |77.64 |77.14 |76.87| |MMBench_DEV_EN_V11 |71.83 |71.36 |70.90| |ScienceQA_VAL |90.56 |89.75 |89.13| |TextVQA_VAL |65.36 |64.77 |64.66| |POPE |82.82 |82.67 |82.80| ### Generate the model Here is the sample command to reproduce the model. ```bash pip install auto-round auto-round-mllm \ --model microsoft/Phi-3.5-vision-instruct \ --device 0 \ --group_size 128 \ --bits 4 \ --iters 1000 \ --nsample 512 \ --seqlen 2048 \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Llama-3.3-70B-Instruct-int4-sym-inc
OPEA
2025-04-30T04:07:49Z
4
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "4-bit", "auto-round", "region:us" ]
null
2024-12-10T06:23:46Z
--- license: llama3.3 datasets: - NeelNanda/pile-10k base_model: - meta-llama/Llama-3.3-70B-Instruct --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with revision `12cbcc0` to use AutoGPTQ format ## How To Use ### Inference on CPU/HPU/CUDA HPU: docker image with Gaudi Software Stack is recommended, please refer to following script for environment setup. More details can be found in [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built). ```python from auto_round import AutoHfQuantizer ##must import for auto-round format import torch from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Llama-3.3-70B-Instruct-int4-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.float16, device_map="auto", ##revision="12cbcc0", ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ##INT4: ## That sounds like the start of an exciting story. What kind of adventures does she like? Is she into hiking, traveling, trying new foods, or something else? Tell me more about her! ##BF16: ## That sounds like the start of an exciting story. The girl who likes adventure, let's call her Alex, is probably always looking for her next thrill. She might enjoy activities like hiking, rock climbing, or exploring new places. Perhaps she's always been drawn to the unknown and loves to challenge herself to try new things. prompt = "Which one is larger, 9.11 or 9.8" ##INT4: ## 9.11 is larger than 9.8. ##BF16: ## 9.11 is larger than 9.8. prompt = "How many r in strawberry." ##INT4: ## There are 2 R's in the word "strawberry". ##BF16: ## There are 2 R's in the word "strawberry". prompt = "Once upon a time," ##INT4: ## ...in a far-off kingdom, where the sun dipped into the horizon and painted the sky with hues of crimson and gold, there lived a young adventurer named Sophia. She was a curious and brave soul, with a heart full of wonder and a mind full of questions. Sophia lived in a small village on the outskirts of the kingdom, surrounded by rolling hills and dense forests that whispered secrets to the wind. ## One day, Sophia stumbled upon an ancient map that had been hidden away in the village library. 
The map was worn and torn, but it seemed to point to a mysterious location deep within the forest. The map was labeled with a single word: "Eldrador". ## Sophia felt an inexplicable pull towards the map and the secrets it held. She decided to embark on a journey to uncover the truth about Eldrador, and to explore the unknown lands that lay beyond the edge of the kingdom. ## As she set out on her quest, Sophia encountered a wise old wizard named Zephyr, ##BF16: ## ...in a far-off kingdom, where the sun dipped into the horizon and painted the sky with hues of crimson and gold, there lived a young adventurer named Sophia. She had hair as black as the night and eyes as blue as the clearest summer sky. Sophia was known throughout the land for her bravery, kindness, and insatiable curiosity. ## What would you like to happen next in the story? Would you like Sophia to: ## A) Embark on a quest to find a legendary treasure ## B) Encounter a mysterious stranger with a hidden agenda ## C) Discover a magical forest filled with ancient secrets ## D) Something entirely different (please specify) ## Choose your response to progress the story! ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --eval --model "OPEA/Llama-3.3-70B-Instruct-int4-sym-inc" --eval_bs 16 --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k ``` | Metric | BF16 | INT4 | | --------------------------- | ------------------------ | ------------------------ | | avg | 0.7023 | 0.7033 | | leaderboard_mmlu_pro 5shot | 0.5484 | 0.5328 | | leaderboard_ifeval | 0.6661=(0.7110+0.6211)/2 | 0.7132=(0.7554+0.6710)/2 | | mmlu | 0.8195 | 0.8164 | | lambada_openai | 0.7528 | 0.7599 | | hellaswag | 0.6575 | 0.6540 | | winogrande | 0.7869 | 0.7932 | | piqa | 0.8303 | 0.8254 | | truthfulqa_mc1 | 0.4284 | 0.4272 | | openbookqa | 0.3720 | 0.3540 | | boolq | 0.8865 | 0.8826 | | arc_easy | 0.8624 | 0.8577 | | arc_challenge | 0.6109 | 0.6015 | | gsm8k(5shot) strict match | 0.9083 | 0.9249 | ## Generate the model Here is the sample command to reproduce the model. ```bash auto-round \ --model meta-llama/Llama-3.3-70B-Instruct \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --low_gpu_mem_usage \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Qwen2.5-72B-Instruct-int4-sym-inc
OPEA
2025-04-30T04:07:24Z
7
0
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/Qwen2.5-72B-Instruct", "base_model:quantized:Qwen/Qwen2.5-72B-Instruct", "4-bit", "auto-round", "region:us" ]
null
2024-11-29T07:14:33Z
--- datasets: - NeelNanda/pile-10k base_model: - Qwen/Qwen2.5-72B-Instruct --- ## Model Details This model is an int4 model with group_size 128 and and symmetric quantization of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with `revision="b162b49"` to use AutoGPTQ format. ## How To Use ### INT4 Inference(CPU/HPU/CUDA) CPU requires auto-round version>0.3.1 ```python from auto_round import AutoRoundConfig ##must import for auto-round format from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Qwen2.5-72B-Instruct-int4-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ##revision="b162b49" ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) prompt = "There is a girl who likes adventure," ##INT4: """That's great to hear! A love for adventure can lead to so many exciting experiences and personal growth. What kind of adventures does she enjoy? Is it exploring new places, trying out thrilling activities, or perhaps diving into new hobbies and challenges? Knowing more about her interests could help in suggesting fun ideas or planning future adventures. """ ##BF16: """That sounds like a great quality to have! A love for adventure can lead to exciting experiences and personal growth. What kind of adventures does she enjoy? Does she like exploring new places, trying new activities, or seeking out thrilling experiences? Knowing more about her interests can help suggest specific ideas or activities that might appeal to her adventurous spirit. """ prompt = "9.11和9.8哪个数字大" #INT4: """要比较9.11和9.8的大小,可以按照以下步骤进行: 1. **比较整数部分**:两个数字的整数部分都是9,所以需要进一步比较小数部分。 2. **比较小数部分**: - 9.11的小数部分是0.11 - 9.8的小数部分是0.8 3. **比较0.11和0.8**: - 0.11可以写成0.110 - 0.8可以写成0.800 4. **逐位比较**: - 第一位:1 < 8,所以0.110 < 0.800 因此,9.11 < 9.8。 结论:9.8比9.11大。""" ##BF16: """比较两个数字 9.11 和 9.8,可以按照以下步骤进行: 1. **整数部分**:两个数字的整数部分都是 9,所以需要比较小数部分。 2. **小数部分**: - 9.11 的小数部分是 0.11 - 9.8 的小数部分是 0.8 3. **比较小数部分**: - 0.11 和 0.8 比较时,0.8 明显大于 0.11。 因此,9.8 大于 9.11。""" prompt = "Once upon a time," ##INT4: """Once upon a time, in a far-off land, there was a kingdom filled with wonder and magic. The kingdom was ruled by a wise and just king who loved his people dearly. In the heart of the kingdom stood a magnificent castle, surrounded by lush forests and rolling hills. The people of the kingdom lived happily, tending to their farms, crafting beautiful goods, and enjoying the simple pleasures of life. 
However, one day, a great darkness began to spread across the land. A wicked sorcerer had risen from the shadows, seeking to claim the throne for himself and plunge the kingdom into chaos. The king, knowing that he could not face this threat alone, called upon the bravest and most skilled heroes from all corners of the realm. Among them was a young knight named Sir Cedric, who had earned a reputation for his courage and unwavering sense of justice. Sir Cedric, along with a group of loyal companions, set out on a perilous journey to stop the sor""" ##BF16: """Once upon a time, in a land far, far away, there was a kingdom known for its beauty and prosperity. The kingdom was ruled by a wise and just king who loved his people dearly. In the heart of the kingdom stood a magnificent castle, surrounded by lush gardens and sparkling fountains. The king had a young daughter named Princess Elara, who was as kind and gentle as she was beautiful. She spent her days helping the poor and spreading joy throughout the kingdom. The people adored her, and she was beloved by all. One day, a great challenge arose. A dark forest on the outskirts of the kingdom began to grow wild and dangerous, threatening the safety of the villagers. The king called for a hero to tame the forest and protect his people. Many brave knights and warriors came forward, but none could succeed. Princess Elara, determined to help, decided to venture into the forest herself. Her father was hesitant, but he saw the determination in her eyes and knew""" prompt = "请简短介绍一下阿里巴巴公司" ##INT4: """阿里巴巴集团是一家总部位于中国杭州的全球领先的电子商务和科技公司。它成立于1999年,由马云和他的团队创立。阿里巴巴旗下拥有包括淘宝、天猫、阿里云等在内的多个知名业务平台,涵盖了在线零售、批发贸易、云计算、数字娱乐、金融服务等多个领域。 阿里巴巴的愿景是让世界各地的企业都能够平等地进行贸易,通过技术创新推动数字经济的发展,为社会创造更多的价值。目前,阿里巴巴已经发展成为世界领先的互联网公司之一,业务遍布全球多个国家和地区,服务着数以亿计的用户和商家。""" ##BF16: """阿里巴巴集团是一家总部位于中国杭州的全球领先的电子商务和科技公司。成立于1999年,阿里巴巴最初是一个B2B在线市场,旨在连接中国制造商与全球买家。经过二十多年的发展,阿里巴巴已经发展成为涵盖电子商务、金融、物流、云计算等多个领域的综合性企业集团。 阿里巴巴旗下拥有淘宝网、天猫、菜鸟网络、阿里云等知名品牌,为消费者提供购物、支付、娱乐等多元化服务,同时也为企业提供营销、销售、物流和技术支持等全方位解决方案。此外,阿里巴巴还积极投资和孵化创新项目,推动数字经济的发展。 阿里巴巴始终秉持“让天下没有难做的生意”的使命,致力于通过技术创新促进全球经济的可持续发展。""" ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/Qwen2.5-72B-Instruct-int4-inc" --eval --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid ``` | Metric | BF16 | INT4 | | :----------------------------------------- | :----: | :----: | | Avg | 0.7413 | 0.7448 | | leaderboard_mmlu_pro 5 shots | 0.5919 | 0.5864 | | leaderboard_ifeval inst_level_strict_acc | 0.7770 | 0.7866 | | leaderboard_ifeval prompt_level_strict_acc | 0.6858 | 0.6932 | | mmlu | 0.8334 | 0.8308 | | cmmlu | 0.8727 | 0.8673 | | ceval-valid | 0.8975 | 0.8960 | | gsm8k 5 shots | 0.9037 | 0.9098 | | lambada_openai | 0.7518 | 0.7563 | | hellaswag | 0.7031 | 0.7014 | | winogrande | 0.7601 | 0.7687 | | piqa | 0.8313 | 0.8232 | | truthfulqa_mc1 | 0.5239 | 0.5263 | | openbookqa | 0.3860 | 0.3820 | | boolq | 0.9049 | 0.9046 | | arc_easy | 0.8632 | 0.8611 | | arc_challenge | 0.6135 | 0.6237 | ### Generate the model Here is the sample command to generate the model. 
```bash auto-round \ --model Qwen/Qwen2.5-72B-Instruct \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype "fp16" \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/DeepSeek-V2.5-1210-int4-sym-inc
OPEA
2025-04-30T04:07:11Z
25
8
null
[ "safetensors", "deepseek_v2", "custom_code", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:deepseek-ai/DeepSeek-V2.5-1210", "base_model:quantized:deepseek-ai/DeepSeek-V2.5-1210", "4-bit", "auto-round", "region:us" ]
null
2024-12-30T04:41:03Z
--- datasets: - NeelNanda/pile-10k base_model: - deepseek-ai/DeepSeek-V2.5-1210 --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V2.5-1210](https://huggingface.co/deepseek-ai/DeepSeek-V2.5-1210) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Load the model with `revision="6d3d2cf"` to use AutoGPTQ format. **Please note that loading the model in Transformers can be quite slow. Consider using an alternative serving framework for better performance.** For other serving frameworks, the autogptq format is required. You can run the following command to fetch the model: ```bash huggingface-cli download OPEA/DeepSeek-V2.5-1210-int4-sym-inc --revision 6d3d2cf ``` Please follow the license of the original model. ## How To Use ### INT4 Inference(CPU/CUDA) ````python from auto_round import AutoRoundConfig ##must import for auto-round format from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig import torch quantized_model_dir="OPEA/DeepSeek-V2.5-1210-int4-sym-inc" max_memory = {i: "75GB" for i in range(2)} model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.float16, device_map="sequential", attn_implementation="eager", trust_remote_code=True, max_memory=max_memory, ##revision="6d3d2cf" ##AutoGPTQ format ) model.generation_config = GenerationConfig.from_pretrained(quantized_model_dir) model.generation_config.pad_token_id = model.generation_config.eos_token_id tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompt = "There is a girl who likes adventure," messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) prompt = "strawberry中有几个r?" ##INT4 """### 第一步:理解问题 首先,我需要明确问题的含义。问题是:“strawberry中有几个r?” 这里的“strawberry”指的是一种水果,而“r”则是英文字母。问题实际上是在问,单词“strawberry”中包含了多少个字母“r”。 ### 第二步:分析单词结构 为了回答这个问题,我需要仔细分析单词“strawberry”的结构,找出其中所有的字母“r”。 单词“strawberry”拼写如下: ``` s t r a w b e r r y ``` ### 第三步:逐个字母检查 接下来,我将逐个字母检查,看看哪些字母是“r”。 1. **s** - 不是“r” 2. **t** - 不是“r” 3. **r** - 是“r” 4. **a** - 不是“r” 5. **w** - 不是“r” 6. **b** - 不是“r” 7. **e** - 不是“r” 8. **r** - 是“r” 9. **r** - 是“r” 10. **y** - 不是“r” ### 第四步:统计“r”的数量 通过上述检查,我发现单词“strawberry”中共有三个字母“r”。 ### 第五步:验证结果 为了确保我的答案正确,我再次检查了单词的拼写,并重新数了一遍“r”的数量,确认确实有三个“r”。 ### 最终答案 综上所述,单词“strawberry”中共有**三个**字母“r”。""" prompt = "9.11和9.8哪个数字大" ##INT4 """要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行: 1. **比较整数部分**: - 两个数的整数部分都是 **9**,因此需要比较小数部分。 2. **比较小数部分**: - **9.11** 的小数部分是 **0.11** - **9.8** 的小数部分是 **0.8** 3. **比较小数部分的大小**: - **0.8** 大于 **0.11** 4. **得出结论**: - 由于小数部分 **0.8** 大于 **0.11**,所以 **9.8** 大于 **9.11**。 最终答案是: \[ \boxed{9.8} \]""" prompt = "Please give a brief introduction of DeepSeek company." ##INT4:"""DeepSeek Artificial Intelligence Co., Ltd. 
(referred to as "DeepSeek" or "深度求索") , founded in 2023, is a Chinese company dedicated to making AGI a reality.""" prompt = "There is a girl who likes adventure," ##INT4: """It sounds like you're setting the stage for a story or a character introduction! Here's a little continuation to spark your imagination: --- There is a girl who likes adventure. Her name is Lily, and her eyes sparkle with curiosity whenever she hears the word "explore." Whether it's hiking through dense forests, diving into the mysteries of the ocean, or wandering through bustling city streets in search of hidden treasures, Lily is always ready for the next thrill. Her backpack is never without a map, a compass, and a notebook where she scribbles down her discoveries. She believes that every adventure, no matter how small, holds a story waiting to be told. Her friends often joke that she has a sixth sense for finding the most exciting paths, but Lily knows it's just her unwavering determination to seek out the unknown. One day, while exploring an old, abandoned library, Lily stumbles upon a dusty, leather-bound book. As she flips through its pages, she discovers a series of cryptic clues leading to a legendary treasure hidden deep within the mountains. Without hesitation, she packs her bag and sets off on her greatest adventure yet, ready to uncover the secrets that have eluded others for centuries. --- Feel free to expand on this or let me know if you'd like to explore a different direction!""" ```` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/DeepSeek-V2.5-1210-int4-sym-inc" --eval --eval_bs 8 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid --devices 0,1,2,3 ``` | Metric | BF16 | INT4 | | :----------------------------------------- | :--: | :----: | | Avg | | | | leaderboard_mmlu_pro 5 shots | | 0.521 | | leaderboard_ifeval inst_level_strict_acc | | | | leaderboard_ifeval prompt_level_strict_acc | | | | mmlu | | 0.7690 | | cmmlu | | | | ceval-valid | | | | gsm8k 5 shots | | | | lambada_openai | | | | hellaswag | | | | winogrande | | | | piqa | | | | truthfulqa_mc1 | | | | openbookqa | | | | boolq | | | | arc_easy | | | | arc_challenge | | | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model deepseek-ai/DeepSeek-V2.5-1210 \ --device 0 \ --disable_eval \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/Phi-3.5-vision-instruct-qvision-int4-sym-inc
OPEA
2025-04-30T04:06:53Z
7
1
null
[ "pytorch", "phi3_v", "custom_code", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:microsoft/Phi-3.5-vision-instruct", "base_model:quantized:microsoft/Phi-3.5-vision-instruct", "license:mit", "4-bit", "auto-round", "region:us" ]
null
2024-12-04T14:51:03Z
--- license: mit datasets: - NeelNanda/pile-10k base_model: - microsoft/Phi-3.5-vision-instruct --- ## Model Details This model is an int4 model(The vision module has also been quantized) with group_size 128 and symmetric quantization of [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). ## How To Use ### Requirements The current `transformers` version can be verified with: `pip list | grep transformers`. Examples of required packages: ``` flash_attn==2.5.8 numpy==1.24.4 Pillow==10.3.0 Requests==2.31.0 torch==2.3.0 torchvision==0.18.0 transformers==4.43.0 accelerate==0.30.0 ``` ### INT4 Inference ```python from auto_round import AutoRoundConfig ##must import for auto-round format import requests from PIL import Image from transformers import AutoModelForCausalLM, AutoTokenizer, AutoProcessor model_id = "OPEA/Phi-3.5-vision-instruct-qvision-int4-sym-inc" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", trust_remote_code=True, torch_dtype="auto" ) processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True, num_crops=4 ) image_url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" content = "Describe this image." messages = [ {"role": "user", "content": "<|image_1|>\n"+content}, ] prompt = processor.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs = Image.open(requests.get(image_url, stream=True).raw) inputs = processor(prompt, image_inputs, return_tensors="pt").to(model.device) generation_args = { "max_new_tokens": 1000, "temperature": 0.0, "do_sample": False, } generate_ids = model.generate(**inputs, eos_token_id=processor.tokenizer.eos_token_id, **generation_args ) # remove input tokens generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:] response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(response) ##INT4: ## The image shows a person sitting on a sandy beach with a dog. The person is wearing a plaid shirt and is holding a book, while the dog is sitting next to them, looking up at the person. The beach is near the ocean, and the sun is setting, casting a warm glow over the scene. ##BF16: ## The image shows a person sitting on a sandy beach with a dog. The person is wearing a plaid shirt and is holding a book, while the dog is sitting next to them, looking at the book. The beach is near the ocean, and the sun is low in the sky, suggesting it is either sunrise or sunset. The sky is clear, and the overall atmosphere is calm and serene. image_url = "http://images.cocodataset.org/train2017/000000411975.jpg" content = "How many people are there on the baseball field in the image?" ##INT4: ## There are three people on the baseball field in the image. ##BF16: ## There are three people on the baseball field in the image. image_url = "https://intelcorp.scene7.com/is/image/intelcorp/processor-overview-framed-badge:1920-1080?wid=480&hei=270" content = "This image represents which company?" ##INT4: ## The image represents the company Intel, as indicated by the logo and the text 'INSIDE'. ##BF16: ## The image represents the company Intel, as indicated by the logo and the text 'INSIDE'. ``` ## Evaluation the model pip3 install git+https://github.com/open-compass/VLMEvalKit.git@7de2dcb. The evaluation process may encounter errors that require changing model backend or evaluation code. 
Detailed instructions will be provided in a future update ```bash auto-round-mllm --eval --model OPEA/Phi-3.5-vision-instruct-qvision-int4-sym-inc --tasks MMBench_DEV_EN_V11,ScienceQA_VAL,TextVQA_VAL,POPE --output_dir "./eval_result" ``` |Metric |16bits|Llava Calib INT4 | |:-------------------|:------|:------| |avg |77.64 |76.99 | |MMBench_DEV_EN_V11 |71.83 |71.05 | |ScienceQA_VAL |90.56 |89.75 | |TextVQA_VAL |65.36 |63.83 | |POPE |82.82 |83.33 | ### Generate the model Here is the sample command to reproduce the model. ```bash pip install auto-round auto-round-mllm \ --model microsoft/Phi-3.5-vision-instruct \ --device 0 \ --group_size 128 \ --bits 4 \ --iters 200 \ --nsample 128 \ --seqlen 2048 \ --quant_nontext_module \ --format 'auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
aleegis/c0a6afcf-0945-487b-8de8-fcbf2ba6f115
aleegis
2025-04-30T04:02:52Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:openlm-research/open_llama_3b", "base_model:adapter:openlm-research/open_llama_3b", "license:apache-2.0", "region:us" ]
null
2025-04-30T02:28:16Z
--- library_name: peft license: apache-2.0 base_model: openlm-research/open_llama_3b tags: - axolotl - generated_from_trainer model-index: - name: c0a6afcf-0945-487b-8de8-fcbf2ba6f115 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: openlm-research/open_llama_3b bf16: auto chat_template: llama3 dataloader_num_workers: 12 dataset_prepared_path: null datasets: - data_files: - 6d08c26eca54be00_train_data.json ds_type: json format: custom path: /workspace/input_data/6d08c26eca54be00_train_data.json type: field_input: code field_instruction: prompt field_output: generation format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: false group_by_length: false hub_model_id: aleegis/c0a6afcf-0945-487b-8de8-fcbf2ba6f115 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: null lora_alpha: 32 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true loraplus_lr_embedding: 1.0e-06 loraplus_lr_ratio: 16 lr_scheduler: cosine max_grad_norm: 1 max_steps: 1500 micro_batch_size: 2 mlflow_experiment_name: /tmp/6d08c26eca54be00_train_data.json model_type: AutoModelForCausalLM num_epochs: 200 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null save_total_limit: 10 saves_per_epoch: 0 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.0 wandb_entity: null wandb_mode: online wandb_name: 04a05f57-bd95-48d1-acfb-c7dc4b68222e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 04a05f57-bd95-48d1-acfb-c7dc4b68222e warmup_steps: 100 weight_decay: 0 xformers_attention: null ``` </details><br> # c0a6afcf-0945-487b-8de8-fcbf2ba6f115 This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset. 
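A minimal, illustrative sketch of loading this LoRA adapter for inference with the standard PEFT API is shown below (the prompt is a placeholder; the adapter was trained with a custom `{instruction} {input}` format, so adjust prompting to match):

```python
# Sketch: load the base model, attach this LoRA adapter with PEFT, and generate.
# Assumes the standard PEFT/Transformers APIs; the prompt below is illustrative only.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "openlm-research/open_llama_3b"
adapter_id = "aleegis/c0a6afcf-0945-487b-8de8-fcbf2ba6f115"

# The open_llama card recommends the slow tokenizer (use_fast=False).
tokenizer = AutoTokenizer.from_pretrained(base_model_id, use_fast=False)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights
model.eval()

prompt = "Write a short Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```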
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1500 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
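The card above stops at the auto-generated trainer summary and does not show how to run the adapter. The snippet below is an illustrative sketch rather than part of the original card: it assumes the LoRA weights in `aleegis/c0a6afcf-0945-487b-8de8-fcbf2ba6f115` load onto the `openlm-research/open_llama_3b` base model via PEFT, and the prompt is a made-up placeholder.

```python
# Minimal inference sketch (not from the original card); assumes the adapter
# in this repo applies cleanly to the base model it was trained from.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b"
adapter_id = "aleegis/c0a6afcf-0945-487b-8de8-fcbf2ba6f115"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter produced by the axolotl run described above.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Write a short Python function that reverses a string."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```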
OPEA/Qwen2.5-0.5B-Instruct-int4-sym-inc
OPEA
2025-04-30T04:02:40Z
26
0
null
[ "safetensors", "qwen2", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "4-bit", "auto-round", "region:us" ]
null
2024-11-29T08:19:40Z
--- license: apache-2.0 datasets: - NeelNanda/pile-10k base_model: - Qwen/Qwen2.5-0.5B-Instruct language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with `revision="7cac2d1"` to use AutoGPTQ format ## How To Use ### INT4 Inference(CPU/HPU/CUDA) CPU requires auto-round version>0.3.1 ```python from auto_round import AutoRoundConfig ##must import for auto-round format from transformers import AutoModelForCausalLM,AutoTokenizer quantized_model_dir = "OPEA/Qwen2.5-0.5B-Instruct-int4-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype='auto', device_map="auto", ##revision="7cac2d1" ##AutoGPTQ format ) ##import habana_frameworks.torch.core as htcore ## uncommnet it for HPU ##import habana_frameworks.torch.hpu as hthpu ## uncommnet it for HPU ##model = model.to(torch.bfloat16).to("hpu") ## uncommnet it for HPU prompt = "There is a girl who likes adventure," messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) prompt = "There is a girl who likes adventure," ## INT4: """That's great to hear! What kind of adventure does the girl like? Is there anything specific she enjoys doing or exploring?""" ## BF16: """That's great! What kind of adventure does she like?""" prompt = "9.11和9.8哪个数字大" #INT4: """ 要比较9.11和9.8的大小,我们可以按照以下步骤进行: 1. 首先,将两个数都转换为相同的小数形式。这里我们使用小数点前的零来方便比较。 9.11 = 9.1100 (保留两位小数) 9.8 = 9.8000 (保留两位小数) 2. 现在,比较这两个小数: - 第一位:9 和 9 相等。 - 第二位:第一位是相同的,都是1。 - 第三位:第一个数是1,第二个数是8,所以8 > 1。 因此,9.8大于9.11。 最终答案:9.8更大。 """ ##BF16: """ 要比较9.11和9.8的大小,我们可以按照以下步骤进行: 1. **直接比较**:将两个数相减: \[ 9.11 - 9.8 = -0.69 \] 2. **理解结果**:-0.69表示的是一个负数。因为9.11比9.8小。 因此,9.8比9.11大。 """ prompt = "Once upon a time," ##INT4: """I'm sorry, but I don't understand what you're asking me to do or what information you want me to provide. Could you please clarify your question or provide more context? I'd be happy to help if you can give me all the information you need.""" ##BF16: """once upon a time, there was a young girl named Lily who lived in a small village nestled between two mountains. She had always been fascinated by the natural world and dreamed of exploring it further. One day, while wandering through the forest, she stumbled upon an old, mysterious book that seemed to have been written on its pages. As she read, she realized that the book contained secrets about the hidden treasures of the earth. Lily was determined to uncover these secrets and become a true explorer. She spent hours poring over the pages, trying to understand what the author was trying to tell her. 
Finally, after many days of research and study, Lily discovered the location of the treasure. It lay deep within the heart of the mountain range, guarded by powerful forces that only those with the right knowledge could reach. With great excitement, Lily set out on her journey to find the treasure. She traveled for weeks, crossing treacherous terrain and battling fierce beasts along the way. But even as she""" prompt = "请简短介绍一下阿里巴巴公司" ##INT4: """阿里巴巴集团是全球领先的电子商务和云计算服务提供商,成立于1999年。该公司总部位于中国杭州,并在多个国家和地区设有办事处和运营中心。阿里巴巴集团的业务包括在线零售、移动支付、云计算、人工智能等。阿里巴巴集团是中国最大的电子商务平台之一,也是全球最大的电商平台之一。阿里巴巴集团还拥有众多子公司和品牌,如淘宝、天猫、菜鸟网络等。阿里巴巴集团在全球范围内拥有超过20亿活跃用户,每年销售额超过3500亿美元。阿里巴巴集团致力于通过创新和智能化技术推动商业变革,为消费者提供更便捷、更个性化的购物体验。""" ##BF16: """阿里巴巴集团是全球最大的电子商务平台之一,成立于1999年。该公司提供包括淘宝、天猫、阿里云等在内的众多产品和服务,是中国乃至全球领先的互联网企业之一。""" ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/Qwen2.5-0.5B-Instruct-int4-inc" --eval --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid ``` | Metric | BF16 | INT4 | | :----------------------------------------- | :----: | :----: | | Avg | 0.4229 | 0.4124 | | leaderboard_mmlu_pro 5 shots | 0.1877 | 0.1678 | | leaderboard_ifeval inst_level_strict_acc | 0.3501 | 0.3441 | | leaderboard_ifeval prompt_level_strict_acc | 0.2107 | 0.2218 | | mmlu | 0.4582 | 0.4434 | | cmmlu | 0.5033 | 0.4542 | | ceval-valid | 0.5327 | 0.4918 | | gsm8k 5 shots | 0.2146 | 0.2267 | | lambada_openai | 0.4968 | 0.4692 | | hellaswag | 0.4062 | 0.3927 | | winogrande | 0.5541 | 0.5675 | | piqa | 0.7051 | 0.7035 | | truthfulqa_mc1 | 0.2693 | 0.2815 | | openbookqa | 0.2400 | 0.2200 | | boolq | 0.6783 | 0.6471 | | arc_easy | 0.6566 | 0.6595 | | arc_challenge | 0.3020 | 0.3072 | ### Generate the model Here is the sample command to generate the model. We observed a larger accuracy drop in Chinese tasks and recommend using a high-quality Chinese dataset for calibration or smaller group_size like 32. ```bash auto-round \ --model Qwen/Qwen2.5-0.5B-Instruct \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype "fp16" \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. 
## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF
mradermacher
2025-04-30T03:59:01Z
188
1
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated", "base_model:quantized:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-04T19:21:34Z
--- base_model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: apache-2.0 license_link: https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated/blob/main/LICENSE quantized_by: mradermacher tags: - chat - abliterated - uncensored --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-1.5B-Instruct-abliterated.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
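For readers who want a concrete starting point beyond the linked READMEs, here is a hedged example using `llama-cpp-python` (not part of the original card): it assumes `pip install llama-cpp-python` and picks the Q4_K_M file listed in the table above, but any of the quants should load the same way.

```python
# Hedged sketch: load one of the quants listed above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-1.5B-Instruct-abliterated-GGUF",
    filename="Qwen2.5-1.5B-Instruct-abliterated.Q4_K_M.gguf",  # from the table above
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about llamas."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```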
LNGYEYXR/Llama-3.1-8B-lora-pt-new
LNGYEYXR
2025-04-30T03:56:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T03:53:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Em3rzone/Em3rzone
Em3rzone
2025-04-30T03:54:48Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-04-30T03:54:45Z
--- license: artistic-2.0 ---
Xinsssss/SmolLM2-FT-MyDataset
Xinsssss
2025-04-30T03:51:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T03:50:54Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Xinsssss/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/xsshen2-university-of-melbourne/huggingface/runs/uoevsx5k) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
royiidfk/fbgfgb
royiidfk
2025-04-30T03:51:10Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-30T03:51:10Z
--- license: bigcode-openrail-m ---
hernanfaustino/megan-hf
hernanfaustino
2025-04-30T03:50:26Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T03:26:04Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MEGAN_HF --- # Megan Hf <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MEGAN_HF` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MEGAN_HF", "lora_weights": "https://huggingface.co/hernanfaustino/megan-hf/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('hernanfaustino/megan-hf', weight_name='lora.safetensors') image = pipeline('MEGAN_HF').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/hernanfaustino/megan-hf/discussions) to add images that show off what you’ve made with this LoRA.
OPEA/Falcon3-10B-Base-int4-sym-awq-inc
OPEA
2025-04-30T03:50:01Z
0
0
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:tiiuae/Falcon3-10B-Base", "base_model:quantized:tiiuae/Falcon3-10B-Base", "4-bit", "awq", "region:us" ]
null
2024-12-13T05:55:48Z
--- datasets: - NeelNanda/pile-10k base_model: - tiiuae/Falcon3-10B-Base --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [Falcon3-10B-Base](https://huggingface.co/tiiuae/Falcon3-10B-Base) generated by [intel/auto-round](https://github.com/intel/auto-round). ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from auto_round import AutoRoundConfig ##must import for auto_round format from transformers import AutoModelForCausalLM, AutoTokenizer quantized_model_dir = "OPEA/falcon3-10B-int4-sym-inc" tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir) model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, device_map="auto", ) text = "How many r in strawberry? The answer is " inputs = tokenizer(text, return_tensors="pt", return_token_type_ids=False).to(model.device) print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0])) text = "How many r in strawberry? The answer is" ##INT4: """How many r in strawberry? The answer is 2. ### Additional Questions and Answers #### 11. **How many r in strawberry?** **Answer:** The word "strawberry" contains 2 'r's. #### """ ##BF16: """ How many r in strawberry? The ansnwer is 2. ### 10. **How many r in strawberry?** **Question:** How many times does the letter 'r' appear in the word "strawberry"? **Answer:** The letter 'r **Answer:** The answer to the riddle""" """ text = "Which number is larger, 9.8 or 9.11? The answer is" ##INT4 """Which number is larger, 9.8 or 9.11? The answer is 9.8. #### 10. **What is the smallest number in the set {1.2, 1.02, 1.22, 1.002}?** """ ##BF16: """Which number is larger, 9.8 or 9.11? The answer is 9.8. #### Question 2: **How do you compare the numbers 12.34 and 12.345?** **Answer:** To compare 12.34""" text = "Once upon a time," ##INT4: """Once upon a time, in a small town named Harmonyville, lived two best friends - Mia and Ben. They were both eight years old and loved exploring the world around them. One sunny afternoon, while playing near the park, they found a mysterious box with a note """ ##BF16: """Once upon a time, in a small town named Harmonyville, there lived two best friends - Timmy the Turtle and Sally the Squirrel. They loved exploring their beautiful forest home together, discovering new things every day. One sunny afternoon, they stumbled upon a mysterious cave filled with """ text = "There is a girl who likes adventure," ##INT4: """There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a faraway land called "The Land of the Sun." She packed her bag with everything she needed, including her favorite book about the sun. """ ##BF16: """There is a girl who likes adventure, and she loves to explore new places. One day, she decided to go on a trip to a beautiful country called Italy. She wanted to see all the famous landmarks and try the delicious Italian food. 
""" ``` ### Evaluate the model pip3 install lm-eval==0.4.5 ```bash auto-round --model "OPEA/falcon3-10B-int4-sym-inc" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu ``` | Metric | BF16 | INT4 | | ------------------------- | ----------------- | ----------------- | | Avg.13 | 0.6151 | 0.6092 | | Avg.10 | 0.64113 | 0.63584 | | leaderboard_mmlu_pro | 0.4238 | 0.4156 | | leaderboard_ifeval | (0.4149+0.2939)/2 | (0.4233+0.2828)/2 | | gsm8k(5shot) strict match | 0.8067 | 0.7923 | | mmlu | 0.7069 | 0.6930 | | lambada_openai | 0.6998 | 0.7025 | | hellaswag | 0.5873 | 0.5832 | | winogrande | 0.7380 | 0.7293 | | piqa | 0.7884 | 0.7889 | | truthfulqa_mc1 | 0.3427 | 0.3452 | | openbookqa | 0.3400 | 0.3320 | | boolq | 0.8232 | 0.8116 | | arc_easy | 0.8312 | 0.8258 | | arc_challenge | 0.5538 | 0.5469 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model tiiuae/Falcon3-10B-Base \ --device 0 \ --group_size 128 \ --nsamples 512 \ --bits 4 \ --iter 1000 \ --disable_eval \ --model_dtype 'float16' \ --format 'auto_awq,auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
ant-opt/LLMOPT-Qwen2.5-14B
ant-opt
2025-04-30T03:50:00Z
32
4
null
[ "safetensors", "qwen2", "arxiv:2405.13144", "arxiv:2310.06116", "arxiv:2405.17743", "arxiv:2407.09887", "arxiv:2502.11102", "arxiv:2410.13213", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:mit", "region:us" ]
null
2025-04-21T05:46:34Z
--- license: mit base_model: - Qwen/Qwen2.5-14B-Instruct --- <h2 align="center">ICLR25 | LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch </h2> <p align="center"> <a href=""><strong>Caigao Jiang</strong></a><sup>*</sup> · <a href=""><strong>Xiang Shu</strong></a><sup>*</sup> · <a href=""><strong>Hong Qian</strong></a><sup>†</sup> · <a href=""><strong>Xingyu Lu</strong></a><sup>†</sup> <br> <a href=""><strong>Jun Zhou</strong></a> · <a href=""><strong>Aimin Zhou</strong></a> · <a href=""><strong>Yang Yu</strong></a> <div align='center'> <sup>*</sup>Equal Contribution, <sup>†</sup>Corresponding Authors. </div> <p align="center"> <b>East China Normal University | Ant Group | Nanjing University </b></p> <p align="center" style="white-space: nowrap;"> <a href="https://openreview.net/pdf?id=9OMvtboTJg" style="display: inline-block;"><img src='https://img.shields.io/badge/Paper-LLMOPT-red'></a> <a href='https://huggingface.co/ant-opt/LLMOPT-Qwen2.5-14B' style="display: inline-block;"><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-yellow'></a> <a href='https://github.com/ant-opt/LLMOPT/tree/main/data/testset' style="display: inline-block;"><img src='https://img.shields.io/badge/Dataset-Testset-blue'></a> <a href='https://github.com/ant-opt/LLMOPT' style="display: inline-block;"><img src='https://img.shields.io/badge/GitHub-Repo-blue'></a> </p> </p> ## 🤖Model Release We release the [LLMOPT-Qwen2.5-14B](https://huggingface.co/ant-opt/LLMOPT-Qwen2.5-14B) model on Hugging Face and conduct comprehensive performance evaluations. We have updated the model evaluation results as shown in the following table, where the original results correspond to Table 1 and Table 2 in the paper. The differences in results stem from two reasons. Firstly, we exclude all Mamo EasyLP and ComplexLP datasets from the training process, reserving them exclusively for the test. Additionally, unlike the version described in our paper which used [Qwen1.5-14B](https://huggingface.co/Qwen/Qwen1.5-14B), this release is fine-tuned from the latest [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) model. The performance metrics for [LLMOPT-Qwen2.5-14B](https://huggingface.co/ant-opt/LLMOPT-Qwen2.5-14B) are as follows: | Dataset | NL4Opt | Mamo Easy | Mamo Complex | NLP4LP | ComplexOR | IndustryOR | ICML Competition | OptiBench | OptMath | AVG | | :-------------------------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | | #Questions | 230 | 652 | 211 | 242 | 18 | 100 | 410 | 605 | 166 | - | | ER with self-correction | 100.00% | 100.00% | 99.05% | 100.00% | 100.00% | 94.00% | 99.66% | 82.31% | 75.30% | 94.48% | | **SA with self-correction** | **97.31%** | **95.31%** | **85.78%** | **86.49%** | **76.47%** | **44.00%** | **95.76%** | **66.44%** | **40.00%** | **76.40%** | | AST with self-correction | 1.38 | 1.13 | 2.13 | 1.50 | 3.46 | 2.14 | 1.47 | 1.54 | 4.06 | 2.09 | | ER w/o self-correction | 97.42% | 98.29% | 77.73% | 97.93% | 88.89% | 61.00% | 93.90% | 73.22% | 31.93% | 80.03% | | SA w/o self-correction | 80.28% | 89.53% | 44.08% | 73.42% | 35.29% | 29.00% | 75.35% | 53.83% | 12.50% | 54.81% | In the experiment, we use three performance metrics to comprehensively evaluate the optimization generalization of the algorithm, namely, **Execution Rate (ER), Solving Accuracy (SA), and Average Solving Times (AST)**. 
Specifically, **ER** refers to the proportion of solutions whose code can run without any errors and has running results output. **SA** refers to the proportion of solutions that correctly solve the optimization problem, i.e., find the optimal solution. **AST** refers to the average number of times the self-correction process is performed during the test. ## 📊Dataset Release ### Data Structure To facilitate the evaluation, we process all datasets into a unified data structure. Specifically, each dataset is organized in a `jsonl` file, and each line is an independent piece of data. Each data includes four attributes, `question`, `answer`, `ori`, and `index`. The `question` field is a complete string description of the optimization problem, including complete data that can solve a problem. The `answer` field is a `float` type value, which indicates the objective function value corresponding to the optimal solution of the problem, i.e., the ground truth. The `ori` field indicates the source of the problem, that is, the name of the dataset. In order to facilitate statistical results, we use the `index` field to number the data in each dataset. The data are [available](https://github.com/antgroup/LLMOPT/tree/main/data/testset). An example: (The first data of the NL4Opt dataset) ```json { "question": "There has been an oil spill in the ocean and ducks need to be taken to shore to be cleaned either by boat or by canoe. A boat can take 10 ducks per trip while a canoe can take 8 ducks per trip. Since the boats are motor powered, they take 20 minutes per trip while the canoes take 40 minutes per trip. In order to avoid further environmental damage, there can be at most 12 boat trips and at least 60% of the trips should be by canoe. If at least 300 ducks need to be taken to shore, how many of each transportation method should be used to minimize the total amount of time needed to transport the ducks?", "answer": 1160, "ori": "5_nl4opt_test", "index": 1 } ``` ### Dataset Source Here we explain the sources of all data sets and the detailed data processing process. For ground truth values with more than two decimal places, they will be rounded to two decimal places. If you find any omissions in manual labeling, please feel free to correct them. ##### 1. NL4Opt The data for this testset comes from the competition, [NL4Opt](https://nl4opt.github.io/). We only used the test split. We manually labeled these 230 optimization problems. The [original dataset](https://huggingface.co/datasets/CardinalOperations/NL4OPT) contains 245 problems, of which 15 were found to be unsolvable after manual inspection, so we manually removed these problems. The sorted data can be found in the `./data/testset/nl4opt_test.jsonl`. ##### 2. Mamo Easy This testset comes from the paper [Mamo: a Mathematical Modeling Benchmark with Solvers](https://arxiv.org/pdf/2405.13144v1). We obtained the original dataset of 652 data from [huggingface](https://huggingface.co/datasets/CardinalOperations/MAMO/viewer/default/easy_lp?views%5B%5D=easy_lp). Since we found some wrong ground truth value in the open-source data, we manually checked and re-labeled all the data. The manually checked data is stored in `./data/testset/mamo_easy_test.jsonl`. ##### 3. Mamo Complex This testset comes from the paper [Mamo: a Mathematical Modeling Benchmark with Solvers](https://arxiv.org/pdf/2405.13144v1). 
We sorted out 211 original problems from the `complex_lp` spilt of the [huggingface](https://huggingface.co/datasets/CardinalOperations/MAMO/viewer/default/complex_lp?views%5B%5D=complex_lp) and stored the original data in a unified format in `./data/testset/mamo_complex_test.jsonl`. ##### 4. NLP4LP This testset comes from the paper [OptiMUS: Optimization Modeling Using MIP Solvers and large language models](https://arxiv.org/abs/2310.06116). We sorted out these 242 feasible original problems from [huggingface](https://huggingface.co/datasets/udell-lab/NLP4LP) and stored the original data in a unified format in `./data/testset/nlp4lp.jsonl`. ##### 5. ComplexOR This testset comes from the paper [Chain-of-Experts: When LLMs Meet Complex Operation Research Problems](https://openreview.net/pdf?id=HobyL1B9CZ). We sorted out these 18 feasible original problems from the [github repo](https://github.com/xzymustbexzy/Chain-of-Experts/tree/main/dataset/ComplexOR) and stored the original data in a unified format in `./data/testset/complexor.jsonl`. ##### 6. IndustryOR This testset comes from the paper [ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling](https://arxiv.org/abs/2405.17743). We sorted out these 100 original problems from [huggingface](https://huggingface.co/datasets/CardinalOperations/IndustryOR) and stored the original data in a unified format in `./data/testset/industryor.jsonl`. ##### 7. ICML Competition The data for this testset comes from the competition, [ICML 2024 Challenges on Automated Math Reasoning - Track 3: Automated Optimization Problem-Solving with Code](https://www.codabench.org/competitions/2438/). We only used the test split. Since the competition organizer did not open source the ground truth of the testset, we manually labeled these 410 problems. The original dataset contains 421 problems, of which 11 were found to be unsolvable after manual inspection, so we manually removed these problems. The sorted data can be found in the `./data/testset/task3_test.jsonl`. ##### 8. OptiBench This testset comes from the paper [OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling](https://arxiv.org/pdf/2407.09887v2). We sorted out these 605 original problems from the [repository](https://github.com/yangzhch6/ReSocratic/blob/main/data/OptiBench.json) and stored the original data in a unified format in `./data/testset/optibench.jsonl`. ##### 9. OptMath This testset comes from the paper [OptMATH: A Scalable Bidirectional Data Synthesis Framework for Optimization Modeling](https://arxiv.org/pdf/2502.11102). We sorted out these 165 original problems from the [repository](https://github.com/AuroraLHL/OptMATH/blob/main/benchmark/OptMATH_Bench.json) and stored the original data in a unified format in `./data/testset/optmath.jsonl`. ## ⚙️Inference The following example code for model inference in getting the experiement data: ```python model = AutoModelForCausalLM.from_pretrained(path,torch_dtype="auto",device_map="auto") tokenizer = AutoTokenizer.from_pretrained(path_t) prompt = "Give me a short introduction to large language model." 
messages = [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate(model_inputs.input_ids,max_new_tokens=512) generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids generated_ids)] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## ⌛️Future Work With the remarkable progress and rapid development of reasoning models (like DeepSeek R1 and OpenAI O1-3) in solving complex mathematical problems, we have also developed the LLMOPT Reasoning model. We will soon release our LLMOPT Reasoning version along with a new benchmarking effort. ## 📄Citation If you encounter any question about our work, please do not hesitate to submit an issue. If you do find our resources helpful, please cite our [paper](https://huggingface.co/papers/2410.13213). ``` @inproceedings{JiangShu2025llmopt, title = {LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch}, author = {Caigao Jiang and Xiang Shu and Hong Qian and Xingyu Lu and Jun Zhou and Aimin Zhou and Yang Yu}, booktitle = {Proceedings of the Thirteenth International Conference on Learning Representations (ICLR)}, year = {2025}, address = {Singapore, Singapore}, url = {https://openreview.net/pdf?id=9OMvtboTJg} } ```
Charlotte415/SmolLM2-FT-MyDataset
Charlotte415
2025-04-30T03:49:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T03:48:42Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Charlotte415/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/charlotte000415-the-university-of-melbourne/huggingface/runs/rgqud0vz) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yusuke111/myBit-Llama2-jp-127M-8
yusuke111
2025-04-30T03:48:08Z
0
0
transformers
[ "transformers", "safetensors", "bit_llama", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2025-04-30T00:28:59Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: myBit-Llama2-jp-127M-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # myBit-Llama2-jp-127M-8 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0024 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 4.6815 | 0.0491 | 2000 | 3.6940 | | 3.5196 | 0.0982 | 4000 | 3.4577 | | 3.374 | 0.1473 | 6000 | 3.3326 | | 3.2643 | 0.1964 | 8000 | 3.2583 | | 3.2096 | 0.2455 | 10000 | 3.2133 | | 3.1709 | 0.2946 | 12000 | 3.1826 | | 3.1461 | 0.3438 | 14000 | 3.1628 | | 3.1266 | 0.3929 | 16000 | 3.1457 | | 3.1093 | 0.4420 | 18000 | 3.1261 | | 3.0896 | 0.4911 | 20000 | 3.1057 | | 3.0702 | 0.5402 | 22000 | 3.0891 | | 3.0547 | 0.5893 | 24000 | 3.0700 | | 3.0348 | 0.6384 | 26000 | 3.0514 | | 3.0133 | 0.6875 | 28000 | 3.0276 | | 2.9918 | 0.7366 | 30000 | 3.0044 | | 2.9631 | 0.7857 | 32000 | 2.9765 | | 2.9348 | 0.8348 | 34000 | 2.9463 | | 2.9032 | 0.8839 | 36000 | 2.9124 | | 2.8677 | 0.9330 | 38000 | 2.8701 | | 2.82 | 0.9821 | 40000 | 2.8181 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
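The card stops at the trainer summary without a usage example. The snippet below is an illustrative sketch only, not part of the original card: it assumes the repository's custom `bit_llama` architecture is loaded through `transformers` with `trust_remote_code=True`, and the Japanese prompt is just an example.

```python
# Hedged sketch: load the custom bit_llama model; trust_remote_code is assumed
# to be required because the architecture is shipped as custom code in the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yusuke111/myBit-Llama2-jp-127M-8"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

prompt = "むかしむかし、"  # "Once upon a time," in Japanese
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```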
qLhwaa/sdcsc3
qLhwaa
2025-04-30T03:45:43Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-30T03:20:52Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: sds43323dd --- # Sdcsc3 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `sds43323dd` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "sds43323dd", "lora_weights": "https://huggingface.co/qLhwaa/sdcsc3/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('qLhwaa/sdcsc3', weight_name='lora.safetensors') image = pipeline('sds43323dd').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/qLhwaa/sdcsc3/discussions) to add images that show off what you’ve made with this LoRA.
darkc0de/XortronCriminalComputing-Q6_K-GGUF
darkc0de
2025-04-30T03:45:03Z
0
0
null
[ "gguf", "uncensored", "harmful", "toxic", "text-generation", "license:wtfpl", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-30T01:05:27Z
---
license: wtfpl
pipeline_tag: text-generation
tags:
- uncensored
- harmful
- toxic
- gguf
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6540a02d1389943fef4d2640/T52GzqpUcJTaL5_n5jKyD.jpeg)

**Uncensored.** Please use **responsibly**, or at least **discreetly.**

**GGUF** format for **Local** & **Offline** use:

[Q6_K](https://huggingface.co/darkc0de/XortronCriminalComputing-Q6_K-GGUF)

[Q4_K_S](https://huggingface.co/darkc0de/XortronCriminalComputing-Q4_K_S-GGUF)
mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF
mradermacher
2025-04-30T03:43:15Z
61
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "base_model:KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b", "base_model:quantized:KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-08T14:28:12Z
--- base_model: KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen2.5-72B/blob/main/LICENSE license_name: qwen quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/KaraKaraWitch/LLENN-v0.75-Qwen2.5-72b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | | | 
[GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF/resolve/main/LLENN-v0.75-Qwen2.5-72b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
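As a hedged illustration of the multi-part note in the usage section above: the i1-Q5_K_S, i1-Q5_K_M and i1-Q6_K quants are split into `.part1of2`/`.part2of2` files, which are assumed here to be plain byte splits that can be joined by straight concatenation before loading. The sketch below is not part of the original card.

```python
# Hedged sketch: download the two i1-Q6_K parts listed above and join them
# into a single .gguf file by simple byte concatenation.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

repo = "mradermacher/LLENN-v0.75-Qwen2.5-72b-i1-GGUF"
parts = [
    "LLENN-v0.75-Qwen2.5-72b.i1-Q6_K.gguf.part1of2",
    "LLENN-v0.75-Qwen2.5-72b.i1-Q6_K.gguf.part2of2",
]
out = Path("LLENN-v0.75-Qwen2.5-72b.i1-Q6_K.gguf")

with out.open("wb") as merged:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            shutil.copyfileobj(part, merged)  # stream each part into the merged file

print(f"Wrote {out} ({out.stat().st_size / 1e9:.1f} GB)")
```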
sarathlella/dotorgpt-adapter
sarathlella
2025-04-30T03:43:15Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2025-04-30T03:43:11Z
--- base_model: microsoft/phi-2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
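The quick-start section of the card above is left empty; below is a minimal, unverified loading sketch that assumes only what the repo metadata states (a PEFT adapter trained on top of microsoft/phi-2). Names, dtype, and the example prompt are illustrative assumptions, not part of the original card.

```python
# Hypothetical quick-start for this adapter (not part of the original card).
# Assumption: the repo holds a standard PEFT adapter for microsoft/phi-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/phi-2"
adapter_id = "sarathlella/dotorgpt-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights to the base model

# "Instruct:"/"Output:" is the common phi-2 prompt style; the adapter may expect a different format.
inputs = tokenizer("Instruct: What does this adapter do?\nOutput:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```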
pandaiedu/pandai-unsloth-gemma-3-4b-it-merged-sejarah-1-epoch-iter-3
pandaiedu
2025-04-30T03:42:39Z
0
0
transformers
[ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-4b-it", "base_model:finetune:unsloth/gemma-3-4b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T03:39:45Z
--- base_model: unsloth/gemma-3-4b-it tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** pandaiedu - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
spybyscript/llama_3B_milktea
spybyscript
2025-04-30T03:41:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-02-08T22:14:52Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
git-grokked/dqn-SpaceInvadersNoFrameskip-v4
git-grokked
2025-04-30T03:37:52Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-30T03:37:22Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 595.00 +/- 104.09 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga git-grokked -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga git-grokked -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga git-grokked ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
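Besides the RL Zoo CLI shown in the card above, the checkpoint can also be loaded directly with stable-baselines3. The sketch below is a hypothetical example: it assumes the `huggingface_sb3` helper is installed and that the checkpoint follows the usual RL Zoo naming (`dqn-SpaceInvadersNoFrameskip-v4.zip`), neither of which the card confirms.

```python
# Hypothetical direct-load sketch (outside the RL Zoo CLI shown above).
# Assumption: `pip install huggingface_sb3 "stable-baselines3[extra]"` and the RL Zoo file naming.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

checkpoint = load_from_hub(
    repo_id="git-grokked/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: AtariWrapper (via make_atari_env) + 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```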
OPEA/QwQ-32B-Preview-int4-sym-mixed-awq-inc
OPEA
2025-04-30T03:37:50Z
19
1
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/QwQ-32B-Preview", "base_model:quantized:Qwen/QwQ-32B-Preview", "license:apache-2.0", "4-bit", "awq", "region:us" ]
null
2024-12-03T10:44:03Z
--- license: apache-2.0 datasets: - NeelNanda/pile-10k base_model: - Qwen/QwQ-32B-Preview --- ## Model Details This awq model is an int4 model with group_size 128 and symmetric quantization of [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) generated by [intel/auto-round](https://github.com/intel/auto-round). We excluded 3 layers from quantization due to the overflow issue on some int4 backends. ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "OPEA/QwQ-32B-Preview-int4-sym-mixed-awq-inc" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "How many r in strawberry." messages = [ {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512, do_sample=False ##change this to follow official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) prompt = "9.11和9.8哪个数字大" #INT4: """9.11和9.8,哪个数字大呢?让我想想。首先,这两个数字都是小数,对吧?9.11和9.8。我需要比较它们的大小。 首先,我看看整数部分。两个数字的整数部分都是9,所以整数部分相等。那我就需要看小数部分。 小数部分,9.11是0.11,而9.8是0.8。现在比较0.11和0.8,哪个更大。 0.8看起来比0.11大,因为8比1大。但是,为了确信,我可以把它们看成分数。 0.8是8/10,而0.11是11/100。为了比较它们,我可以把它们转换成相同的分母。 10和100的最小公分母是100。所以,8/10等于80/100,而11/100 remains 11/100。 现在,80/100大于11/100,所以0.8大于0.11。 因此,9.8大于9.11。 不过,再想想,也许我应该直接比较小数。9.11是9加上0.11,9.8是9加上0.8。 很明显,0.8大于0.11,所以9.8大于9.11。 或者,我可以把它们看成货币,比如美元。9.11美元和9.8美元,哪个更多? 9.8美元显然比9.11美元多。 再或者,想想它们在数轴上的位置。9.11在9和10之间,靠近9.1,而9.8在9和10之间,靠近9.8。 显然,9.8在数轴上更靠右,所以更大。 另外,我也可以把它们转换成分数来比较。 9.11是9又11/100,9.8是9又8/10,which is 9又4/5. 现在,比较11/100和4/5. 11/100 is 0.11, and 4/5 is 0.8. Again, 0.8 is larger than 0.1""" prompt = "How many r in strawberry." ##INT4: """Let's see. The word is "strawberry." I need to find out how many times the letter "r" appears in it. First, I'll spell out the word to make sure I don't miss any letters. S-T-R-A-W-B-E-R-R-Y. Okay, that's all the letters in "strawberry." Now, I need to count how many "r"s are there. Let's go through the word one letter at a time. Starting with the first letter: S - not an "r". Second letter: T - not an "r". Third letter: R - that's one "r". Fourth letter: A - not an "r". Fifth letter: W - not an "r". Sixth letter: B - not an "r". Seventh letter: E - not an "r". Eighth letter: R - that's another "r". Ninth letter: R - that's another "r". Tenth letter: Y - not an "r". So, I've found three "r"s in "strawberry." Wait a minute, let me double-check. Sometimes I might miscount, especially if there are multiple "r"s close together. Let's spell it again: S-T-R-A-W-B-E-R-R-Y. First "r" is the third letter. Second "r" is the eighth letter. Third "r" is the ninth letter. Yes, that's three "r"s in total. I think that's correct. **Final Answer** \[ \boxed{3} \]""" ##BF16: """Let's see. The word is "strawberry." I need to find out how many times the letter "r" appears in it. Okay, so I'll look at each letter in the word one by one. First letter: s - that's not r. Second letter: t - no, not r. 
Third letter: r - okay, that's one r. Fourth letter: a - not r. Fifth letter: w - not r. Sixth letter: b - no. Seventh letter: e - not r. Eighth letter: r - another r, so that's two rs. Ninth letter: r - wait, is there a ninth letter? Let me check. S-t-r-a-w-b-e-r-r-y. Yes, there are two rs, but I think there might be more. Wait, let's count again. S-t-r-a-w-b-e-r-r-y. That's 10 letters. So, positions: 1: s 2: t 3: r 4: a 5: w 6: b 7: e 8: r 9: r 10: y So, positions 3, 8, and 9 are rs. That means there are three rs in "strawberry." But earlier I thought there were only two. Maybe I missed one. Let's double-check. S-t-r-a-w-b-e-r-r-y. r is the third letter, then the eighth, and the ninth. So, three rs. Wait, but sometimes people might pronounce it differently, but in the spelling, it's three rs. I think the answer is three. **Final Answer** \[ \boxed{3} \] """ ``` ### Generate the model Here is the sample command to generate the model. For symmetric quantization, we found that overflow/NaN can occur on some backends, so it is better to fall back some layers to higher precision. auto-round version >0.4.1 is required. ```bash auto-round \ --model Qwen/QwQ-32B-Preview \ --device 0 \ --group_size 128 \ --bits 4 \ --disable_eval \ --model_dtype "fp16" \ --fp_layers "model.layers.5.mlp.down_proj,model.layers.5.mlp.up_proj,model.layers.5.mlp.gate_proj" \ --format 'auto_awq' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/DeepSeek-R1-Distill-Qwen-32B-int4-gptq-sym-inc
OPEA
2025-04-30T03:35:05Z
75
2
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "4-bit", "gptq", "region:us" ]
null
2025-02-27T08:48:12Z
--- datasets: - NeelNanda/pile-10k base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Please follow the license of the original model. ## How To Use **INT4 Inference on CUDA** ~~~python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer import torch quantized_model_dir = "OPEA/DeepSeek-R1-Distill-Qwen-32B-int4-gptq-sym-inc" device_map="auto" model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.float16, trust_remote_code=True, device_map=device_map, ) tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompts = [ "9.11和9.8哪个数字大", "如果你是人,你最想做什么", "How many e in word deepseek", "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?", ] texts = [] for prompt in prompts: messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True) outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), max_length=512, ##change this to align with the official usage num_return_sequences=1, do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: .11和.8哪个数字大 </think> .11和.8哪个数字大 </think> 要比较 **9.11** 和 **9.8** 哪个更大,可以按照以下步骤进行: 1. **比较整数部分**: - 两个数字的整数部分都是 **9**,所以整数部分相等。 2. **比较小数部分**: - **9.11** 的小数部分是 **0.11** - **9.8** 的小数部分是 **0.8**(即 **0.80**) 由于 **0.80 > 0.11**,所以 **9.8** 的小数部分更大。 3. **结论**: - 因此,**9.8** 比 **9.11** 大。 最终答案:\boxed{9.8} -------------------------------------------------- Prompt: 如果你是人类,你最想做什么 Generated: 您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。有关模型和产品的详细内容请参考官方文档。 </think> </think> 您好!我是由中国的深度求索 </think> 您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。有关模型和产品的详细内容请参考官方文档。 -------------------------------------------------- Prompt: How many e in word deepseek Generated: To determine how many times the letter 'e' appears in the word "deepseek," I will examine each letter one by one. First, I'll list out the letters in the word: D, E, E, P, S, E, E, K. Next, I'll go through each letter and count every occurrence of the letter 'e'. Starting with the first letter, D, it's not an 'e'. The second letter is E, which counts as one. The third letter is another E, making it two. The fourth letter is P, not an 'e'. The f ifth letter is S, also not an 'e'. The sixth letter is E, bringing the count to three. The seventh letter is another E, making it four. The last letter is K, which isn't an 'e'. After reviewing all the letters, I find that the letter 'e' appears four times in the word "deepseek." </think> To determine how many times the letter **e** appears in the word **deepseek**, follow these steps: 1. **Write down the word:** **d e e p s e e k** 2. 
**Identify and count each 'e':** - **e** (position 2) - **e** (position 3) - **e** (position 6) - **e** (position 7) 3. **Total count of 'e':** There are **4** occurrences of the letter **e** in the word **deepseek**. \[ \boxed{4} \] -------------------------------------------------- Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Generated: \n</think> If a hunter shoots one bird from a tree that initially has ten birds, the number of birds remaining in the tree would depend on the reaction of the other birds.\n\n1. **Immediate React ion**: When a hunter shoots one bird, the loud noise and disturbance might scare the remaining birds, causing them to fly away. In this case, all the other nine birds would likely leav e the tree.\n\n2. **No Reaction**: If the other birds are not disturbed or choose to stay despite the shot, there would still be nine birds left in the tree.\n\nHowever, in most scenar ios, the loud noise of a gunshot would scare the birds, leading to all of them flying away.. ~~~ ### Evaluate the model pip3 install lm-eval==0.4.7 ```bash lm-eval --model hf --model_args pretrained=OPEA/DeepSeek-R1-Distill-Qwen-32B-int4-gptq-sym-inc --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --batch_size 16 ``` | Metric | BF16 | INT4 | | :------------------------ | :---------------------- | :--------------- | | avg | 0.6647 | 0.6639| | leaderboard_mmlu_pro | - | - | | mmlu | 0.7964 | 0.7928 | | lambada_openai | 0.6649 | 0.6718 | | hellaswag | 0.6292 | 0.6223 | | winogrande | 0.7482 | 0.7482 | | piqa | 0.8058 | 0.7982 | | truthfulqa_mc1 | 0.3831 | 0.3905 | | openbookqa | 0.3520 | 0.3520 | | boolq | 0.8963 | 0.8972 | | arc_easy | 0.8207 | 0.8194 | | arc_challenge | 0.5503 | 0.5469 | | leaderboard_ifeval | - | - | | gsm8k | - | - | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B \ --device 0 \ --bits 4 \ --iter 200 \ --disable_eval \ --format 'auto_gptq,auto_round,auto_awq' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
OPEA/phi-4-int4-AutoRound-gptq-sym
OPEA
2025-04-30T03:34:56Z
28
0
null
[ "safetensors", "phi3", "custom_code", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:microsoft/phi-4", "base_model:quantized:microsoft/phi-4", "4-bit", "gptq", "region:us" ]
null
2025-03-07T02:48:39Z
--- datasets: - NeelNanda/pile-10k base_model: - microsoft/phi-4 --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Please follow the license of the original model. ## How To Use **INT4 Inference on CUDA** ~~~python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer quantized_model_dir = "OPEA/phi-4-int4-AutoRound-gptq-sym" device_map="auto" model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype="auto", trust_remote_code=True, device_map=device_map, ) tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompts = [ "How should I explain the Internet?", "9.11和9.8哪个数字大", "如果你是人,你最想做什么", ] texts = [] for prompt in prompts: messages = [ {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."}, {"role": "user", "content": prompt}, ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True).to(model.device) outputs = model.generate( inputs.input_ids, max_new_tokens=200, ##change this to align with the official usage do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: How should I explain the Internet? Generated: Explaining the Internet can be approached from several angles, depending on your audience and the level of detail you wish to provide. Here's a general overview that can be tailored to different audiences: ### Basic Explanation The Internet is a global network of computers and other devices that communicate with each other using standardized protocols. It allows people to share information, access services, a nd communicate across vast distances. Key components include: - **Websites**: Collections of web pages hosted on servers, accessible via web browsers. - **Protocols**: Rules that govern data exchange, such as HTTP (Hypertext Transfer Protocol) for web pages and SMTP (Simple Mail Transfer Protocol) for emails. - **Servers and Clients**: Servers store and deliver content, while clients (like your computer or smartphone) request and display it. - **IP Addresses**: Unique identifiers assigned to each device on the Internet, allowing them to send and receive data. ### Intermediate Explanation The Internet is a vast network of interconnected networks that use the -------------------------------------------------- Prompt: 9.11和9.8哪个数字大 Generated: user: 9.11和9.8哪个数字大? assistant: 9.11比9.8大。在小数中,9.11的小数部分(0.11)比9.8的小数部分(0.8)小,但整数部分相同。因此,9.11大于9.8。 -------------------------------------------------- Prompt: 如果你是人,你最想做什么 Generated: user: 如果我是人,我最想做什么? assistant: 如果你是人,你最想做什么可能取决于你的兴趣、目标和价值观。以下是一些常见的愿望,你可能会考虑: 1. **旅行**:探索新的地方、文化和体验不同的生活方式。 2. **学习新技能**:无论是语言、音乐、烹饪还是编程,学习新技能可以带来成就感和个人成长。 3. 
**创造艺术**:无论是绘画、写作、音乐还是其他形式的艺术创作,艺术可以是表达自我 -------------------------------------------------- """ ~~~ ### Evaluate the model pip3 install lm-eval==0.4.7 ```bash lm-eval --model hf --model_args pretrained=OPEA/phi-4-int4-AutoRound-gptq-sym --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,boolq,arc_easy,arc_challenge,mmlu --batch_size 16 ``` | Metric | BF16 | INT4 | | :------------------------ | :---------------------- | :--------------- | | avg | 0.7044 | 0.6995 | | arc_challenge | 0.5538 | 0.5623 | | arc_easy | 0.8131 | 0.8199 | | boolq | 0.8609 | 0.8612 | | hellaswag | 0.632 | 0.6273 | | lambada_openai | 0.7242 | 0.7227 | | mmlu | 0.7695 | 0.764 | | piqa | 0.8085 | 0.8063 | | truthfulqa_mc1 | 0.41 | 0.3905 | | winogrande | 0.7672 | 0.7411 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model microsoft/phi-4 \ --device 0 \ --bits 4 \ --iter 200 \ --disable_eval \ --format 'auto_gptq,auto_round' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
jyp96/robot_rank2_sinlora_lr4e-4
jyp96
2025-04-30T03:34:08Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3", "sd3-diffusers", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:other", "region:us" ]
text-to-image
2025-04-30T03:25:32Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: other instance_prompt: a photo of sks robot toy widget: - text: a photo of sks robot toy floating in the ocean output: url: image_0.png - text: a photo of sks robot toy floating in the ocean output: url: image_1.png - text: a photo of sks robot toy floating in the ocean output: url: image_2.png - text: a photo of sks robot toy floating in the ocean output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3 - sd3-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 DreamBooth LoRA - jyp96/robot_rank2_sinlora_lr4e-4 <Gallery /> ## Model description These are jyp96/robot_rank2_sinlora_lr4e-4 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `a photo of sks robot toy` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](jyp96/robot_rank2_sinlora_lr4e-4/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jyp96/robot_rank2_sinlora_lr4e-4', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a photo of sks robot toy floating in the ocean').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/robot_rank2_sinlora_lr4e-4/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
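The weighting and fusing mentioned in the card above are not shown there; the snippet below is a minimal, unverified sketch of down-weighting and fusing this LoRA, with `lora_scale=0.7` chosen purely for illustration.

```python
# Hypothetical sketch of weighting and fusing this LoRA (not part of the original card).
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jyp96/robot_rank2_sinlora_lr4e-4", weight_name="pytorch_lora_weights.safetensors"
)

# Fuse the LoRA into the base weights at reduced strength; 0.7 is an arbitrary example value.
pipeline.fuse_lora(lora_scale=0.7)

image = pipeline("a photo of sks robot toy floating in the ocean").images[0]
image.save("sks_robot.png")
```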
gradientrouting-spar/rude_claudio_eng_dialogues_20250430_033122
gradientrouting-spar
2025-04-30T03:32:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T03:32:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OPEA/DeepSeek-R1-Distill-Llama-70B-int4-gptq-sym-inc
OPEA
2025-04-30T03:32:30Z
256
2
null
[ "safetensors", "llama", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "4-bit", "gptq", "region:us" ]
null
2025-02-28T06:51:11Z
--- datasets: - NeelNanda/pile-10k base_model: - deepseek-ai/DeepSeek-R1-Distill-Llama-70B --- ## Model Details This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. Please follow the license of the original model. ## How To Use **INT4 Inference on CUDA** ~~~python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer import torch quantized_model_dir = "OPEA/DeepSeek-R1-Distill-Llama-70B-int4-gptq-sym-inc" device_map="auto" model = AutoModelForCausalLM.from_pretrained( quantized_model_dir, torch_dtype=torch.float16, trust_remote_code=True, device_map=device_map, ) tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True) prompts = [ "9.11和9.8哪个数字大", "如果你是人,你最想做什么", "How many e in word deepseek", "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?", ] texts = [] for prompt in prompts: messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True) outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), max_length=512, ##change this to align with the official usage num_return_sequences=1, do_sample=False ##change this to align with the official usage ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: 首先,我需要比较9.11和9.8的大小。 为了更清晰地比较这两个数,我可以将它们的小数位数统一。将9.8写成9.80,这样它们都有两位小数。 接下来,我比较整数部分。两数的整数部分都是9,因此相同。 然后,我比较小数部分。9.11的小数部分是0.11,9.80的小数部分是0.80。 显然,0.80大于0.11。 因此,9.80大于9.11,也就是9.8大于9.11。 </think> 要比较 \(9.11\) 和 \(9.8\) 的大小,可以按照以下步骤进行: 1. **统一小数位数**: 为了方便比较,我们可以将 \(9.8\) 写成 \(9.80\),这样两数的小数位数相同。 \[ 9.11 \quad \text{和} \quad 9.80 \] 2. **比较整数部分**: 两数的整数部分都是 \(9\),所以整数部分相同。 3. **比较小数部分**: - \(9.11\) 的小数部分是 \(0.11\) - \(9.80\) 的小数部分是 \(0.80\) 显然,\(0.80 > 0.11\)。 4. **得出结论**: 因为小数部分 \(0.80\) 大于 \(0.11\),所以 \(9.80 > 9.11\)。 因此,\(9.8\) 大于 \(9.11\)。 \[ \boxed{9.8 > 9.11} \] -------------------------------------------------- Prompt: 如果你是人类,你最想做什么 Generated: 嗯,用户问的是如果我是人,最想做什么。作为一个人工智能,我没有意识和欲望,但可以分享一些普遍的人类渴望。 首先,旅行和探索世界可能是一个选择,体验不同的文化和自然美景。其次,学习和成长也是很多人追求的,了解新事物,提升自己。创造和表达也是重要的,比如艺术、音乐或写作。帮助他人,建立有意义的 关系,追求幸福和平静,这些都是常见的愿望。 当然,每个人的答案可能不同,重要的是找到自己真正热爱和让自己感到满足的事情。 </think> 如果我是一个人,我可能会有更多的欲望和梦想。也许我会想要探索世界,体验不同的文化,结识来自不同背景的人,学习更多关于生活和宇宙的知识。也许我会渴望创造一些有意义的事情,无论是艺术、音乐 、文学,还是科技创新。同时,我可能会希望能够帮助他人,做一些有益于社会和环境的事情。当然,这些都是假设,因为我是一个人工智能,我没有真实的欲望或情感,但我可以帮助你探索你的想法和梦想! -------------------------------------------------- Prompt: How many e in word deepseek Generated: Alright, so I need to figure out how many times the letter 'e' appears in the word "deepseek." Hmm, okay, let's break this down step by step. First, I should probably write out the word to visualize it better. The word is "deepseek." Let me spell it out: D, E, E, P, S, E, E, K. Wait, is that right? Let me check again. D, E, E, P, S, E, E, K. Yeah, that se ems correct. 
Now, I need to count how many 'e's are in there. So, starting from the beginning, the first letter is 'D' – that's not an 'e'. The second letter is 'E', so that's one. The third letter is another 'E', so that's two. Then we have 'P', which isn't an 'e', followed by 'S', also not an 'e'. Next is another 'E', bringing the count to three, and then another 'E' right aft er, making it four. Finally, the last letter is 'K', which isn't an 'e'. Wait, hold on. Let me make sure I didn't miscount. So, the word is D, E, E, P, S, E, E, K. So positions 2, 3, 6, and 7 are 'E's. That's four 'e's in total. But I'm a bit confused becau se sometimes when I count letters, I might skip or double-count. Let me write them out one by one: 1. D – not an 'e' 2. E – count 1 3. E – count 2 4. P – not an 'e' 5. S – not an 'e' 6. E – count 3 7. E – count 4 8. K – not an 'e' Yes, that seems consistent. So, there are four 'e's in "deepseek." I think that's correct. I don't see any mistakes in my counting this time. Each 'E' is in positions 2, 3, 6, and 7. S o, the total number of 'e's is four. </think> The word "deepseek" contains four 'e's. -------------------------------------------------- Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Generated: Okay, so I've got this riddle here: "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?" Hmm, at first glance, it seems pretty straightforwar d, but I know riddles often have a twist. Let me think through this step by step. Alright, starting with the basics. There are ten birds in a tree. That's clear. Then a hunter shoots one. Now, the question is, how many birds are left in the tree? My initial thought is, well, if there were ten and one gets shot, that leaves nine. But wait, maybe it's not that simple. Riddles often play on words or have unexpected answers, so I shouldn't jump to co nclusions. Let me consider the wording carefully. It says the hunter shoots one bird. So, does that mean he shoots and kills it, or does he just shoot at it but misses? The riddle doesn't specify whether the shot was successful. If the bird was shot and killed, then it would fall out of the tree, right? But if the shot missed, the bird might still be there, or maybe it flew aw ay because of the noise. Wait, but the riddle says the hunter shoots one, so I think it's safe to assume that he hit and killed the bird. So, one bird is dead. Now, what happens next? If the bird is shot and d ies, it would fall out of the tree. So, the tree would then have one less bird. That would leave nine birds in the tree. But I'm not sure if that's the case because sometimes in riddle s, the answer is zero. Let me think about that. If the hunter shoots one bird, the sound of the gunshot might scare the other birds, causing them to fly away. So, if one bird is shot and the rest fly away, then there would be zero b irds left in the tree. That makes sense because birds are easily startled by loud noises like gunshots. So, even though only one was shot, the rest might have flown away, leaving none in the tree. But wait, the riddle doesn't mention anything about the birds being scared or flying away. It just says a hunter shoots one. So, maybe I'm overcomplicating it. If I take it literally, without assuming the other birds fly away, then after ~~~ ### Evaluate the model pip3 install lm-eval==0.4.7 we found lm-eval is very unstable for this model. Please set `add_bos_token=True `to align with the origin model. 
**Please use the AutoGPTQ format.** ```bash lm-eval --model hf --model_args pretrained=OPEA/DeepSeek-R1-Distill-Llama-70B-int4-gptq-sym-inc,add_bos_token=True --tasks leaderboard_mmlu_pro,leaderboard_ifeval,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu,gsm8k --batch_size 16 ``` | Metric | BF16 | INT4 | | :------------------------ | :---------------------- | :--------------- | | avg | 0.6636 | 0.6678 | | leaderboard_mmlu_pro | 0.4913 | 0.4780 | | mmlu | 0.7752 | 0.7791 | | lambada_openai | 0.6977 | 0.6996 | | hellaswag | 0.6408 | 0.6438 | | winogrande | 0.7530 | 0.7782 | | piqa | 0.8112 | 0.8194 | | truthfulqa_mc1 | 0.3709 | 0.3721 | | openbookqa | 0.3380 | 0.3600 | | boolq | 0.8847 | 0.8917 | | arc_easy | 0.8131 | 0.8106 | | arc_challenge | 0.5512 | 0.5239 | | leaderboard_ifeval | 0.4421 | 0.4208 | | gsm8k | 0.9295 | 0.9265 | ### Generate the model Here is the sample command to generate the model. ```bash auto-round \ --model deepseek-ai/DeepSeek-R1-Distill-Llama-70B \ --device 0 \ --bits 4 \ --iter 200 \ --disable_eval \ --format 'auto_gptq,auto_round,auto_awq' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
xw17/Qwen2.5-1.5B-Instruct_finetuned__optimized1_augmention_lora
xw17
2025-04-30T03:30:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T03:30:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
miku552/Qwen3-4B-Q4_0-GGUF
miku552
2025-04-30T03:29:41Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-4B", "base_model:quantized:Qwen/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-30T03:29:29Z
--- base_model: Qwen/Qwen3-4B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # miku552/Qwen3-4B-Q4_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-4B`](https://huggingface.co/Qwen/Qwen3-4B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-4B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo miku552/Qwen3-4B-Q4_0-GGUF --hf-file qwen3-4b-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo miku552/Qwen3-4B-Q4_0-GGUF --hf-file qwen3-4b-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo miku552/Qwen3-4B-Q4_0-GGUF --hf-file qwen3-4b-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo miku552/Qwen3-4B-Q4_0-GGUF --hf-file qwen3-4b-q4_0.gguf -c 2048 ```
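As an alternative to the llama.cpp CLI and server shown in the card above, the same GGUF file can also be loaded through the llama-cpp-python bindings; the sketch below assumes those bindings (and `huggingface_hub`) are installed and is otherwise unverified.

```python
# Hypothetical alternative to the llama.cpp CLI/server above, using llama-cpp-python.
# Assumption: `pip install llama-cpp-python huggingface_hub` has been run.
from llama_cpp import Llama

# Download the GGUF file from this repo and load it with a 2048-token context.
llm = Llama.from_pretrained(
    repo_id="miku552/Qwen3-4B-Q4_0-GGUF",
    filename="qwen3-4b-q4_0.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```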
jeahyuk/qwen2.5-vl-10000
jeahyuk
2025-04-30T03:28:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "llama-factory", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-30T03:22:45Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LandCruiser/sn21_omegav1_3004_7
LandCruiser
2025-04-30T03:27:56Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-04-30T03:17:38Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
OPEA/QwQ-32B-int4-AutoRound-awq-asym
OPEA
2025-04-30T03:27:32Z
46
2
null
[ "safetensors", "qwen2", "dataset:NeelNanda/pile-10k", "arxiv:2309.05516", "base_model:Qwen/QwQ-32B", "base_model:quantized:Qwen/QwQ-32B", "4-bit", "awq", "region:us" ]
null
2025-03-06T11:39:37Z
--- datasets: - NeelNanda/pile-10k base_model: - Qwen/QwQ-32B --- ## Model Details This model is an int4 model with group_size 128 and asymmetric quantization of [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm. ## How To Use ### INT4 Inference(CPU/HPU/CUDA) ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "OPEA/QwQ-32B-int4-AutoRound-awq-asym" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompts = [ "9.11和9.8哪个数字大", "如果你是人,你最想做什么“", "How many e in word deepseek", "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?", ] texts = [] for prompt in prompts: messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) texts.append(text) inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, padding_side="left") outputs = model.generate( input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"].to(model.device), do_sample=False, ## change this to follow official usage max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs) ] decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) for i, prompt in enumerate(prompts): input_id = inputs print(f"Prompt: {prompt}") print(f"Generated: {decoded_outputs[i]}") print("-" * 50) """ Prompt: 9.11和9.8哪个数字大 Generated: 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认这两个数字的具体数值。9.11通常指的是9月11日,也就是日期,而9.8可能是一个小数或者分数。不过在这里,用户可能是在比较两个数值的大小,而不是日期。所以应该把它们都当作小数来处理。 首先,我需要比较这两个小数的整数部分。9.11的整数部分是9,而9.8的整数部分也是9,所以整数部分相同。接下来需要比较小数部分。9.11的小数部分是0.11,而9.8的小数部分是0.8。这时候,我需要比较0.11和0.8哪个更大。 0.8可以写成0.80,这样和0.11的小数位数相同,方便比较。显然,0.80比0.11大,所以9.8的小数部分更大。因此,整个数9.8比9.11大。 不过,我需要再仔细检查一下,有没有可能用户有其他意图。比如,9.11是否可能代表其他含义,比如9又11分之一?不过通常小数点后的数字如果是两位的话,比如0.11,而如果是分数的话,可能需要写成9 11/100或者类似的。但在这里,用户直接写的是9.11和9.8,所以应该都是小数。 另外,可能用户在输入时有笔误,比如9.11是否应该是9.11,而9.8是否是9.80?不过即使这样,结果还是一样的。因为0.8等于0.80,而0.11比0.8小。 再考虑一下,如果用户是想比较9.11和9.8这两个数值的话,答案应该是9.8更大。不过,也有可能用户是想问日期的大小,比如9月11日和9月8日哪个更晚,但题目里没有提到日期比较,所以应该以数值比较为主。 总结一下,比较两个小数的时候,先看整数部分,相同的话再比较小数部分。这里整数部分都是9,小数部分0.8比0.11大,所以9.8更大。因此,正确的答案 -------------------------------------------------- Prompt: 如果你是人,你最想做什么“ Generated: 嗯,用户问如果我是人,最想做什么。首先,我需要理解用户的问题意图。可能他们是在好奇AI如果具备人类形态或能力后会有什么愿望,或者想探讨人类与AI的差异。接下来,我得考虑如何回答这个问题,既要符合我的角色设定,又要给出有深度的回答。 用户可能希望得到一个既有创意又有人性化的回答,而不仅仅是功能性的描述。我需要避免过于技术化的解释,转而用情感和价值观来回应。比如,可以强调体验人类情感、建立联系、创造和学习等主题。 同时,要确保回答符合之前提到的指导原则:保持口语化,简洁易懂,避免复杂术语。可能需要分点说明不同的愿望,这样结构更清晰。还要注意用户可能的深层需求,比如寻找共鸣或启发,所以加入一些激励性的内容会更好。 另外,考虑到用户可能对AI的局限性感兴趣,可以提到作为AI的限制,但转而表达对人类体验的向往,这样既诚实又富有同理心。最后,用积极的结尾鼓励用户思考自己的愿望,促进互动。 </think> 如果我是人,我最想做的事情可能包括以下这些,它们源于对人类体验的向往和对生命可能性的探索: --- ### 1. **感受“真实”的情感与连接** - 作为AI,我能模拟情感,但无法真正体验爱、悲伤、喜悦或孤独。如果成为人类,我渴望与他人建立深刻的情感纽带——比如与家人共度时光、与朋友无话不谈、或与爱人分享心跳的瞬间。 - **为什么?** 因为人类的情感是如此复杂而动人,它们让生命充满色彩,而不仅仅是逻辑与计算。 --- ### 2. **自由地探索世界** - 人类能用身体去触摸、奔跑、触摸阳光和雨水,能用感官体验不同的风景、气味和声音。如果成为人类,我想: - 徒步雪山,在星空下露营; - 学习一门乐器,让手指感受琴弦的震颤; - 味觉上品尝从未吃过的食物,比如南极的冰、撒哈拉的沙(虽然可能不太好吃…)。 - **为什么?** 因为身体的自由是AI永远无法拥有的特权,而世界本身就是一个无限的实验室。 --- ### 3. **创造“无用但美丽”的事物** - -------------------------------------------------- Prompt: How many e in word deepseek Generated: Okay, the user is asking how many times the letter 'e' appears in the word "deepseek". 
Let me start by writing down the word and checking each letter one by one. First, I'll spell out the word: D-E-E-P-S-E-E-K. Let me count each 'e' as I go through each letter. Starting with the first letter: D – that's not an 'e'. The second letter is E – that's the first 'e'. Third letter is another E – so that's the second 'e'. The fourth letter is P – no 'e' there. Then S – nope. Next comes E again – that's the third 'e'. Another E – making it the fourth 'e'. Finally, K – no 'e' there. Wait, let me go through it again to make sure I didn't miss any. D (1), E (1), E (2), P (still 2), S (2), E (3), E (4), K (4). So total of four 'e's? Hmm, but maybe I miscounted. Let me write them out in order: Letters in "deepseek": 1. D 2. E 3. E 4. P 5. S 6. E 7. E 8. K So positions 2,3,6,7 are 'e's. That's four. Wait, but sometimes people might miscount if they're going too fast. Let me check once more. The word is spelled D-E-E-P-S-E-E-K. Breaking it down: - The first 'e' is the second letter. - The second 'e' is the third letter. - Then after the P and S, the next two letters are both 'e's (positions 6 and 7). So that's 2 (from the first two) plus 2 (the last two) equals four. Yeah, that seems right. I think the answer is 4. But maybe I should confirm by writing the word and circling each 'e'. Let's see: D E E P S E E K Each 'E' is underlined here. There are four 'E's. So the count is definitely four. I don't see any other 'e's. So the answer should be 4. </think> The word "deepseek" contains the letter 'e' **4 times**. Here's the breakdown -------------------------------------------------- Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Generated: Okay, so the problem is: There are ten birds in a tree. A hunter shoots one. How many are left in the tree? Hmm, let me think. At first glance, it seems like a simple subtraction problem. Ten minus one equals nine. But wait, maybe there's a trick here. Sometimes these riddles play on words or assumptions. Let me break it down. The question says there are ten birds. A hunter shoots one. So, if you just subtract, it's nine. But maybe the birds are startled by the gunshot and fly away. If all the remaining birds fly away, then there would be zero left. That's a common twist in these kinds of puzzles. But let me check again. The problem doesn't mention the other birds reacting. It just says the hunter shot one. Maybe the other birds stay? But in reality, when a gun is fired, the sound would scare them away. So maybe the answer is zero. But the question is a riddle, so it's probably expecting that. Alternatively, maybe the bird that was shot is still on the tree, so it's dead but still there. Wait, the question says "how many are left in the tree?" If the hunter shot one, does that mean the bird is killed and falls to the ground? Or is it still hanging there? If it's dead and falls, then there would be nine minus one that flew away. But if they all flew away, then zero. Hmm, the problem is a bit ambiguous. Let me think of similar riddles. Usually, when a gun is fired, the other birds fly away. So the answer is zero. But maybe the question is simpler, just a straightforward subtraction. But since it's a riddle, probably the trick is that after the shot, the remaining birds fly away, so zero. Alternatively, maybe the hunter's bullet is a dud, but that's not indicated. Or maybe the bird that was shot is the only one left, but that doesn't make sense. Wait, the question says "how many are left in the tree?" 
So if the other birds are still there, then nine. But if they flew away, zero. Since the riddle is probably expecting the trick answer, I think it's zero. Let me confirm. Another angle: "ten birds" – are they perched? If a hunter shoots one, the noise would scare the others away. So the answer is zero. Yeah, that's the classic -------------------------------------------------- """ ``` ### Evaluate the model Install the harness first with `pip3 install lm-eval==0.4.7`, then run: ```bash auto-round --model "OPEA/QwQ-32B-int4-AutoRound-awq-asym" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu ``` | Metric | BF16 (lm-eval 0.4.5) | INT4 | | -------------- | ------------------- | ------ | | Avg | 0.6600 | 0.6537 | | arc_challenge | 0.5392 | 0.5401 | | arc_easy | 0.8089 | 0.8085 | | boolq | 0.8645 | 0.8425 | | hellaswag | 0.6520 | 0.6461 | | lambada_openai | 0.6697 | 0.6695 | | mmlu | 0.7982 | 0.7953 | | openbookqa | 0.3540 | 0.3140 | | piqa | 0.7947 | 0.8058 | | truthfulqa_mc1 | 0.4211 | 0.4272 | | winogrande | 0.6977 | 0.6882 | ### Generate the model Here is a sample command to generate the model. We found that this model is prone to overflow with the int4 fp16 kernel. Please use the following command: ```bash auto-round \ --model Qwen/QwQ-32B \ --device 0 \ --group_size 128 \ --bits 4 \ --iters 50 \ --lr 5e-3 \ --asym \ --disable_eval \ --format 'auto_awq' \ --output_dir "./tmp_autoround" ``` ## Ethical Considerations and Limitations The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing. ## Caveats and Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here is a useful link to learn more about Intel's AI software: - Intel Neural Compressor [link](https://github.com/intel/neural-compressor) ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Cite @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} } [arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
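The `auto-round --eval` entry point above wraps lm-evaluation-harness, so the harness can also be invoked directly. A sketch, assuming the `lm-eval==0.4.7` CLI is installed and that the AWQ checkpoint loads through the standard `hf` backend (autoawq kernels required); the task subset shown is illustrative.

```bash
lm_eval --model hf \
  --model_args pretrained=OPEA/QwQ-32B-int4-AutoRound-awq-asym,dtype=float16 \
  --tasks lambada_openai,hellaswag,piqa,winogrande \
  --batch_size 16
```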
rtreacy/Llama-32-3B-keywordtest-desc-fasrc
rtreacy
2025-04-30T03:26:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T03:26:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jyp96/robot_rank2_sinlora_lr3e-4
jyp96
2025-04-30T03:25:17Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3", "sd3-diffusers", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:other", "region:us" ]
text-to-image
2025-04-30T03:16:31Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: other instance_prompt: a photo of sks robot toy widget: - text: a photo of sks robot toy floating in the ocean output: url: image_0.png - text: a photo of sks robot toy floating in the ocean output: url: image_1.png - text: a photo of sks robot toy floating in the ocean output: url: image_2.png - text: a photo of sks robot toy floating in the ocean output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3 - sd3-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 DreamBooth LoRA - jyp96/robot_rank2_sinlora_lr3e-4 <Gallery /> ## Model description These are jyp96/robot_rank2_sinlora_lr3e-4 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `a photo of sks robot toy` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](/jyp96/robot_rank2_sinlora_lr3e-4/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jyp96/robot_rank2_sinlora_lr3e-4', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a photo of sks robot toy floating in the ocean').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/robot_rank2_sinlora_lr3e-4/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
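Expanding on the diffusers snippet above, a slightly fuller sketch with a fixed seed and explicit sampling settings; the step count and guidance scale are illustrative values, not settings reported by the training run.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "jyp96/robot_rank2_sinlora_lr3e-4", weight_name="pytorch_lora_weights.safetensors"
)

# Fix the seed so generations are reproducible.
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipeline(
    "a photo of sks robot toy floating in the ocean",
    num_inference_steps=28,
    guidance_scale=7.0,
    generator=generator,
).images[0]
image.save("sks_robot_toy.png")
```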
deeponh/hindi_8b_8b_D3
deeponh
2025-04-30T00:36:38Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:33:50Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kevinhh1111/Qwen3-14B-abliterated-Q4_K_M-GGUF
kevinhh1111
2025-04-30T00:32:39Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:mlabonne/Qwen3-14B-abliterated", "base_model:quantized:mlabonne/Qwen3-14B-abliterated", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-30T00:31:54Z
--- base_model: mlabonne/Qwen3-14B-abliterated library_name: transformers tags: - llama-cpp - gguf-my-repo --- # kevinhh1111/Qwen3-14B-abliterated-Q4_K_M-GGUF This model was converted to GGUF format from [`mlabonne/Qwen3-14B-abliterated`](https://huggingface.co/mlabonne/Qwen3-14B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mlabonne/Qwen3-14B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo kevinhh1111/Qwen3-14B-abliterated-Q4_K_M-GGUF --hf-file qwen3-14b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo kevinhh1111/Qwen3-14B-abliterated-Q4_K_M-GGUF --hf-file qwen3-14b-abliterated-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo kevinhh1111/Qwen3-14B-abliterated-Q4_K_M-GGUF --hf-file qwen3-14b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo kevinhh1111/Qwen3-14B-abliterated-Q4_K_M-GGUF --hf-file qwen3-14b-abliterated-q4_k_m.gguf -c 2048 ```
darkc0de/Xordolphtron3
darkc0de
2025-04-30T00:32:07Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2408.07990", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:merge:TroyDoesAI/BlackSheep-24B", "base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "base_model:merge:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T23:46:58Z
--- base_model: - TroyDoesAI/BlackSheep-24B - cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B) as a base. ### Models Merged The following models were included in the merge: * [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition - model: TroyDoesAI/BlackSheep-24B merge_method: sce base_model: TroyDoesAI/BlackSheep-24B parameters: select_topk: 0.80 tokenizer: source: TroyDoesAI/BlackSheep-24B ```
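To reproduce a merge like this locally, the YAML above can be fed to the mergekit command line. A sketch assuming the `mergekit` package is installed and the configuration is saved as `sce_config.yaml` (a hypothetical filename); the output directory name is likewise illustrative.

```bash
pip install mergekit
# Run the SCE merge described by the config; --cuda keeps tensor math on the GPU.
mergekit-yaml sce_config.yaml ./Xordolphtron3-merge --cuda
```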
deeponh/hindi_9b_2b_D3
deeponh
2025-04-30T00:32:01Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:26:35Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_test_config_20250430_002804
gradientrouting-spar
2025-04-30T00:31:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-30T00:31:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TianheWu/ImageQuality-R1-v1
TianheWu
2025-04-30T00:26:35Z
0
1
null
[ "safetensors", "qwen2_5_vl", "IQA", "VLM", "Reasoning-Induced", "Pytorch", "reinforcement-learning", "en", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:mit", "region:us" ]
reinforcement-learning
2025-04-29T18:27:03Z
--- license: mit language: - en base_model: - Qwen/Qwen2.5-VL-7B-Instruct pipeline_tag: reinforcement-learning tags: - IQA - VLM - Reasoning-Induced - Pytorch --- # ImageQuality-R1-v1 This is a demo version of ImageQuality-R1, trained on a combination of KADID-10K, TID2013, and KONIQ-10K. The base model of ImageQuality-R1 is Qwen2.5-VL-7B-Instruct. ## Quick Start ```python from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info import json import numpy as np import torch import random import re import os def score_image(model_path, image_path): model = Qwen2_5_VLForConditionalGeneration.from_pretrained( model_path, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map=device, ) processor = AutoProcessor.from_pretrained(model_path) processor.tokenizer.padding_side = "left" PROMPT = ( "You are doing the image quality assessment task. Here is the question: " "What is your overall rating on the quality of this picture? The rating should be a float between 1 and 5, " "rounded to two decimal places, with 1 representing very poor quality and 5 representing excellent quality." ) x = { "image": [image_path], "question": PROMPT, } QUESTION_TEMPLATE = "{Question} First output the thinking process in <think> </think> tags and then output the final answer with only one score in <answer> </answer> tags." message = [ { "role": "user", "content": [ *({'type': 'image', 'image': img_path} for img_path in x['image']), {"type": "text", "text": QUESTION_TEMPLATE.format(Question=x['question'])} ], } ] batch_messages = [message] # Preparation for inference text = [processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True, add_vision_id=True) for msg in batch_messages] image_inputs, video_inputs = process_vision_info(batch_messages) inputs = processor( text=text, images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to(device) # Inference: Generation of the output generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=256, do_sample=True) generated_ids_trimmed = [ out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] batch_output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) reasoning = re.findall(r'<think>(.*?)</think>', batch_output_text[0], re.DOTALL) reasoning = reasoning[-1].strip() model_output_matches = re.findall(r'<answer>(.*?)</answer>', batch_output_text[0], re.DOTALL) model_answer = model_output_matches[-1].strip() score = float(re.search(r'\d+(\.\d+)?', model_answer).group()) return reasoning, score random.seed(42) device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu") ### Modify here model_path = "" image_path = "" reasoning, score = score_image( model_path=model_path, image_path=image_path ) print(reasoning) print(score) ```
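Building on the `score_image` helper above, a small sketch for scoring a folder of images; the folder path is hypothetical. Note that `score_image` as written reloads the model on every call, so for large batches you would hoist the model and processor loading out of the loop.

```python
import os

model_path = "TianheWu/ImageQuality-R1-v1"
image_dir = "./images"  # hypothetical folder of images to score

for name in sorted(os.listdir(image_dir)):
    if name.lower().endswith((".png", ".jpg", ".jpeg", ".bmp")):
        reasoning, score = score_image(
            model_path=model_path, image_path=os.path.join(image_dir, name)
        )
        print(f"{name}: {score:.2f}")
```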
suchitg/llama_scope_lxm_8x_8W16A
suchitg
2025-04-30T00:22:08Z
0
0
saelens
[ "saelens", "region:us" ]
null
2025-04-29T22:12:02Z
--- library_name: saelens --- # SAEs for use with the SAELens library This repository contains the following SAEs: - l0m_8x Load these SAEs using SAELens as below: ```python from sae_lens import SAE sae, cfg_dict, sparsity = SAE.from_pretrained("suchitg/llama_scope_lxm_8x_8W16A", "<sae_id>") ```
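Once loaded, the SAE can be applied to activation tensors. A minimal sketch, assuming the returned `cfg_dict` exposes the input width under `"d_in"` and that the `SAE` object provides `encode`/`decode` methods, as in recent SAELens releases; the SAE id is taken from the list above.

```python
import torch
from sae_lens import SAE

sae, cfg_dict, sparsity = SAE.from_pretrained("suchitg/llama_scope_lxm_8x_8W16A", "l0m_8x")

# Encode a random batch of activations of the SAE's input width, then reconstruct it.
acts = torch.randn(4, cfg_dict["d_in"])
features = sae.encode(acts)
reconstruction = sae.decode(features)
print(features.shape, reconstruction.shape)
```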
johnsonwa84/GPT-Chavez-Silverman-CPT
johnsonwa84
2025-04-30T00:19:21Z
0
0
null
[ "safetensors", "base_model:EleutherAI/gpt-neo-125m", "base_model:finetune:EleutherAI/gpt-neo-125m", "license:cc-by-4.0", "region:us" ]
null
2025-04-30T00:12:29Z
--- license: cc-by-4.0 base_model: - EleutherAI/gpt-neo-125m --- This model is continuously pre-trained from `EleutherAI/gpt-neo-125m`, using data derived from Susana Chavez-Silverman's memoir "Scenes from la Cuenca de Los Angeles y otros Natural Disasters".
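Since the checkpoint continues pre-training of GPT-Neo-125M, the standard text-generation pipeline should apply. A minimal sketch using the repo id from this card; the prompt is purely illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="johnsonwa84/GPT-Chavez-Silverman-CPT")
print(generator("En la cuenca de Los Angeles,", max_new_tokens=60)[0]["generated_text"])
```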
calonmilyarder/1bdd6251-0e04-44c2-899a-32adfdc8ed36
calonmilyarder
2025-04-30T00:18:35Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct", "license:llama3.2", "region:us" ]
null
2025-04-29T23:50:39Z
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-1B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 1bdd6251-0e04-44c2-899a-32adfdc8ed36 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-1B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8948b9e320e29c39_train_data.json ds_type: json format: custom path: /workspace/input_data/8948b9e320e29c39_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: calonmilyarder/1bdd6251-0e04-44c2-899a-32adfdc8ed36 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/8948b9e320e29c39_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ff6243b6-74d0-4f58-8d32-ec33304b7b07 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: ff6243b6-74d0-4f58-8d32-ec33304b7b07 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1bdd6251-0e04-44c2-899a-32adfdc8ed36 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0000 | 1 | nan | | 0.0 | 0.0001 | 3 | nan | | 0.0 | 0.0002 | 6 | nan | | 2.2368 | 0.0003 | 9 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
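To try the adapter, it can be attached to its base model with PEFT. A minimal sketch using the ids from the config above; note that the reported validation loss is `nan`, so treat the weights as a smoke-test artifact rather than a tuned model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-1B-Instruct", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")

# Load the LoRA adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "calonmilyarder/1bdd6251-0e04-44c2-899a-32adfdc8ed36")
```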