---
base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
language:
- zh
- en
license: llama3
model_creator: yentinglin
model_name: Llama-3-Taiwan-70B-Instruct
model_type: llama
pipeline_tag: text-generation
quantized_by: minyichen
tags:
- llama-3
---
# Llama-3-Taiwan-70B-Instruct - GPTQ
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct).
<!-- description end -->
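Below is a minimal sketch of loading this quantized checkpoint with 🤗 Transformers. It assumes `transformers` with GPTQ support (plus `optimum` and `auto-gptq`) is installed and that enough GPU memory is available for the roughly 37 GB of 4-bit weights; the prompt is only an illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the quantized layers across available GPUs
    torch_dtype="auto",
)

# The model is tuned for Traditional Chinese and English; use the Llama-3 chat template.
messages = [{"role": "user", "content": "Briefly introduce Taiwan."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```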
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference](https://huggingface.co/minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ)
* [Yen-Ting Lin's original unquantized model](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
<!-- repositories-available end -->
## Quantization parameters
| Bits | Group Size | Act Order | Damp % | Seq Len | Size |
| ---- | ---------- | --------- | ------ | ------- | -------- |
| 4    | 128        | Yes       | 0.01   | 2048    | 37.07 GB |
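For reference, here is a hedged sketch of how the parameters above map onto a quantization run using the Transformers `GPTQConfig` API. The exact script and calibration data used to produce this repo are not documented here, so the `dataset` choice below is an assumption; quantizing a 70B model this way also requires substantial GPU memory and time.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "yentinglin/Llama-3-Taiwan-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)

gptq_config = GPTQConfig(
    bits=4,             # Bits
    group_size=128,     # Group Size
    desc_act=True,      # Act Order
    damp_percent=0.01,  # Damp %
    model_seqlen=2048,  # Seq Len
    dataset="c4",       # calibration corpus is an assumption, not documented in this card
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,
    device_map="auto",
)
model.save_pretrained("Llama-3-Taiwan-70B-Instruct-GPTQ")
```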