
Nellyw888/VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb - GGUF
This repo contains GGUF format model files for Nellyw888/VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b5753.
Our projects
| Forge |
| --- |
| An OpenAI-compatible multi-provider routing layer. |

| Awesome MCP Servers | TensorBlock Studio |
| --- | --- |
| A comprehensive collection of Model Context Protocol (MCP) servers. | A lightweight, open, and extensible multi-LLM interaction studio. |
Prompt template
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
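When prompting the model by hand (for example through llama-cli's `-p` flag), the placeholders are filled in literally. A minimal sketch with an illustrative system prompt and Verilog request, neither of which comes from the model card; the leading `<s>` is omitted because llama.cpp typically adds the BOS token on its own:

```bash
# Sketch of the template filled in by hand. The system prompt and the
# Verilog request are illustrative placeholders, not from the model card.
# <s> is omitted since llama.cpp typically adds the BOS token itself.
./llama-cli \
  -m VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb-Q2_K.gguf \
  -n 512 \
  -p "[INST] <<SYS>>
You are a Verilog design assistant.
<</SYS>>
Write a synthesizable 4-bit up-counter with asynchronous reset. [/INST]"
```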
Model file specification
Downloading instructions
Command line
First, install the Hugging Face CLI:

```bash
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:

```bash
huggingface-cli download tensorblock/Nellyw888_VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb-GGUF --include "VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```bash
huggingface-cli download tensorblock/Nellyw888_VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
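Once downloaded, a file can be loaded with any llama.cpp build at or after commit b5753. A minimal sketch, assuming your build exposes the `llama-cli` binary and that you fetched the Q2_K file as above:

```bash
# Minimal sketch: interactive chat with the downloaded Q2_K file.
# Assumes a llama.cpp build at commit b5753 or newer; -cnv asks
# llama-cli to apply the chat template stored in the GGUF metadata.
./llama-cli \
  -m MY_LOCAL_DIR/VeriReason-codeLlama-7b-RTLCoder-Verilog-GRPO-reasoning-tb-Q2_K.gguf \
  -cnv
```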