---
language:
  - en
license: apache-2.0
base_model: Felladrin/TinyMistral-248M-Chat-v2
datasets:
  - HuggingFaceH4/ultrachat_200k
  - Felladrin/ChatML-ultrachat_200k
  - Open-Orca/OpenOrca
  - Felladrin/ChatML-OpenOrca
  - hkust-nlp/deita-10k-v0
  - Felladrin/ChatML-deita-10k-v0
  - LDJnr/Capybara
  - Felladrin/ChatML-Capybara
  - databricks/databricks-dolly-15k
  - Felladrin/ChatML-databricks-dolly-15k
  - euclaise/reddit-instruct-curated
  - Felladrin/ChatML-reddit-instruct-curated
  - CohereForAI/aya_dataset
  - Felladrin/ChatML-aya_dataset
  - HuggingFaceH4/ultrafeedback_binarized
pipeline_tag: text-generation
widget:
  - messages:
      - role: system
        content: >-
          You are a highly knowledgeable and friendly assistant. Your goal is to
          understand and respond to user inquiries with clarity. Your
          interactions are always respectful, helpful, and focused on delivering
          the most accurate information to the user.
      - role: user
        content: Hey! Got a question for you!
      - role: assistant
        content: Sure! What's it?
      - role: user
        content: What are some potential applications for quantum computing?
  - messages:
      - role: user
        content: Heya!
      - role: assistant
        content: Hi! How may I help you?
      - role: user
        content: >-
          I'm interested in developing a career in software engineering. What
          would you recommend me to do?
  - messages:
      - role: user
        content: Morning!
      - role: assistant
        content: Good morning! How can I help you today?
      - role: user
        content: Could you give me some tips for becoming a healthier person?
  - messages:
      - role: system
        content: >-
          You are a very creative assistant. User will give you a task, which
          you should complete with all your knowledge.
      - role: user
        content: >-
          Hello! Can you please elaborate a background story of an RPG game
          about wizards and dragons in a sci-fi world?
tags:
  - TensorBlock
  - GGUF
---

# TensorBlock

Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server

## Felladrin/TinyMistral-248M-Chat-v2 - GGUF

This repo contains GGUF format model files for Felladrin/TinyMistral-248M-Chat-v2.

The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b5165.

## Our projects

| Awesome MCP Servers | TensorBlock Studio |
| --- | --- |
| A comprehensive collection of Model Context Protocol (MCP) servers. | A lightweight, open, and extensible multi-LLM interaction studio. |
| 👀 See what we built 👀 | 👀 See what we built 👀 |

## Prompt template

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
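For quick experimentation, the template can be rendered in plain Python. The helper name `build_prompt` below is illustrative, not part of this repo:

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render a single-turn ChatML prompt following the template above."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = build_prompt("You are a helpful assistant.", "Hey! Got a question for you!")
print(text)
```

The string ends right after `<|im_start|>assistant`, so the model's completion begins at the assistant turn.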

## Model file specification

| Filename | Quant type | File Size | Description |
| --- | --- | --- | --- |
| TinyMistral-248M-Chat-v2-Q2_K.gguf | Q2_K | 0.105 GB | smallest, significant quality loss - not recommended for most purposes |
| TinyMistral-248M-Chat-v2-Q3_K_S.gguf | Q3_K_S | 0.120 GB | very small, high quality loss |
| TinyMistral-248M-Chat-v2-Q3_K_M.gguf | Q3_K_M | 0.129 GB | very small, high quality loss |
| TinyMistral-248M-Chat-v2-Q3_K_L.gguf | Q3_K_L | 0.137 GB | small, substantial quality loss |
| TinyMistral-248M-Chat-v2-Q4_0.gguf | Q4_0 | 0.149 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| TinyMistral-248M-Chat-v2-Q4_K_S.gguf | Q4_K_S | 0.149 GB | small, greater quality loss |
| TinyMistral-248M-Chat-v2-Q4_K_M.gguf | Q4_K_M | 0.156 GB | medium, balanced quality - recommended |
| TinyMistral-248M-Chat-v2-Q5_0.gguf | Q5_0 | 0.176 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| TinyMistral-248M-Chat-v2-Q5_K_S.gguf | Q5_K_S | 0.176 GB | large, low quality loss - recommended |
| TinyMistral-248M-Chat-v2-Q5_K_M.gguf | Q5_K_M | 0.179 GB | large, very low quality loss - recommended |
| TinyMistral-248M-Chat-v2-Q6_K.gguf | Q6_K | 0.204 GB | very large, extremely low quality loss |
| TinyMistral-248M-Chat-v2-Q8_0.gguf | Q8_0 | 0.264 GB | very large, extremely low quality loss - not recommended |
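As a rough sanity check, dividing file size by parameter count gives the effective bits per weight of each quant. This sketch assumes the table's sizes are decimal gigabytes and ~248M parameters (taken from the model name); both are approximations:

```python
# Approximate parameter count, inferred from the model name.
PARAMS = 248_000_000

def bits_per_weight(file_size_gb: float) -> float:
    """Effective bits per weight, treating GB as 10^9 bytes."""
    return file_size_gb * 1e9 * 8 / PARAMS

for name, size in [("Q2_K", 0.105), ("Q4_K_M", 0.156), ("Q8_0", 0.264)]:
    print(f"{name}: ~{bits_per_weight(size):.1f} bits/weight")
```

The values come out above the nominal quant widths (e.g. ~5 bits/weight for Q4_K_M) because some tensors, such as embeddings, are kept at higher precision, and that overhead is proportionally larger for a model this small.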

## Downloading instruction

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF --include "TinyMistral-248M-Chat-v2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can try:

```shell
huggingface-cli download tensorblock/Felladrin_TinyMistral-248M-Chat-v2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
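To preview which files a glob like `*Q4_K*gguf` will select before downloading, Python's standard-library `fnmatch` applies the same shell-style wildcard semantics (the filenames below are copied from the specification table above):

```python
from fnmatch import fnmatch

files = [
    "TinyMistral-248M-Chat-v2-Q4_0.gguf",
    "TinyMistral-248M-Chat-v2-Q4_K_S.gguf",
    "TinyMistral-248M-Chat-v2-Q4_K_M.gguf",
    "TinyMistral-248M-Chat-v2-Q5_K_M.gguf",
]
matched = [f for f in files if fnmatch(f, "*Q4_K*gguf")]
print(matched)
# ['TinyMistral-248M-Chat-v2-Q4_K_S.gguf', 'TinyMistral-248M-Chat-v2-Q4_K_M.gguf']
```

Note that Q4_0 is not matched: the pattern requires the literal substring `Q4_K`, so only the K-quants at 4 bits are selected.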