---
pipeline_tag: text-generation
library_name: transformers
base_model: internlm/internlm3-8b-instruct
tags:
- llama-cpp
- internlm3-8b-instruct
- gguf
- Q6_K
- 8b
- 6-bit
- internlm3
- internlm
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---
# roleplaiapp/internlm3-8b-instruct-Q6_K-GGUF

**Repo:** `roleplaiapp/internlm3-8b-instruct-Q6_K-GGUF`
**Original Model:** `internlm3-8b-instruct`
**Organization:** `internlm`
**Quantized File:** `internlm3-8b-instruct-q6_k.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`
**Use Imatrix:** `False`
**Split Model:** `False`
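As a minimal sketch of fetching the quantized file listed above, assuming the `huggingface_hub` Python package is installed (the repo ID and filename come from this card; everything else is illustrative):

```python
from huggingface_hub import hf_hub_download

# Download the single Q6_K GGUF file listed above (Split Model: False).
gguf_path = hf_hub_download(
    repo_id="roleplaiapp/internlm3-8b-instruct-Q6_K-GGUF",
    filename="internlm3-8b-instruct-q6_k.gguf",
)
print(gguf_path)  # local path to the downloaded .gguf file
```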
## Overview
This is a GGUF Q6_K quantized version of internlm3-8b-instruct.
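One hedged sketch of running the quantized file, assuming the `llama-cpp-python` bindings (any llama.cpp-compatible runtime should also work); the model path, context size, and generation settings below are illustrative, not prescribed by this card:

```python
from llama_cpp import Llama

# Load the Q6_K GGUF; n_gpu_layers=-1 offloads all layers to the GPU when one is available.
llm = Llama(
    model_path="internlm3-8b-instruct-q6_k.gguf",
    n_ctx=4096,      # context window; adjust to taste
    n_gpu_layers=-1,
)

# Simple chat-style generation.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what Q6_K quantization trades off."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```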
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to work quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai