---
base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
language:
- zh
- en
license: llama3
model_creator: yentinglin
model_name: Llama-3-Taiwan-70B-Instruct
model_type: llama
pipeline_tag: text-generation
quantized_by: minyichen
tags:
- llama-3
---

# Llama-3-Taiwan-70B-Instruct - GPTQ
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)

<!-- description start -->
## Description

This repo contains GPTQ model files for [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct).

<!-- description end -->
<!-- repositories-available start -->
* [GPTQ models for GPU inference](https://huggingface.co/minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ)
* [Yen-Ting Lin's original unquantized model](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
<!-- repositories-available end -->

## Quantization parameters

- Bits: 4
- Group Size: 128
- Act Order: Yes
- Damp %: 0.1
- Seq Len: 2048
- Size: 37.07 GB

Quantization took about 6.5 hours on an H100.
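
Once downloaded, the quantized checkpoint can be loaded like any other GPTQ model through the `transformers` GPTQ integration. The sketch below is illustrative only (it assumes `transformers` with the `optimum`/`auto-gptq` backend installed and enough GPU memory for the ~37 GB of weights; the generation settings are placeholders, not recommendations from this repo):

```python
# Hedged sketch: loading this GPTQ checkpoint for GPU inference.
# Assumes transformers with the optimum/auto-gptq GPTQ integration
# and a GPU setup large enough to hold ~37 GB of quantized weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # shard layers across available GPUs
)

prompt = "你好，請簡短自我介紹。"  # "Hello, please briefly introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because Act Order (`desc_act`) was enabled during quantization, inference backends that do not support activation reordering may be slower or incompatible; the standard `transformers` GPTQ path handles it.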