minyichen committed
Commit: 3b14208
1 Parent(s): 6ae83b5

Update README.md

Files changed (1)
  1. README.md +42 -3
README.md CHANGED
@@ -1,3 +1,42 @@
- ---
- license: llama3
- ---
+ ---
+ base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
+ language:
+ - zh
+ - en
+ license: llama3
+ model_creator: yentinglin
+ model_name: Llama-3-Taiwan-70B-Instruct
+ model_type: llama
+ pipeline_tag: text-generation
+ quantized_by: minyichen
+ tags:
+ - llama-3
+ ---
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/vlfv5sHbt4hBxb3YwULlU.png" alt="Taiwan LLM Logo" width="600" style="margin-left: auto; margin-right: auto; display: block;"/>
+
+ # Llama-3-Taiwan-70B-Instruct - GPTQ
+ - Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
+ - Original model: [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains GPTQ model files for [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct).
+
+ Multiple GPTQ parameter permutations are provided; see the Provided Files section below for details of each option, its parameters, and the software used to create it.
+
+ <!-- description end -->
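+
+ A minimal loading sketch (assuming `transformers` with `optimum`, `auto-gptq`, and `accelerate` installed, which is one common stack for GPTQ checkpoints):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Repo id taken from the repository list below; adjust if loading a local copy.
+ model_id = "minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="auto",  # shard the quantized weights across available GPUs
+ )
+ ```
+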
+ <!-- repositories-available start -->
+ * [GPTQ models for GPU inference](https://huggingface.co/minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ)
+ * [Yen-Ting Lin's original unquantized model](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
+ <!-- repositories-available end -->
+
+ <!-- prompt-template start -->
+ ## Prompt template: Vicuna
+
+ ```
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
+
+ ```
+ <!-- prompt-template end -->
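+
+ A short, hypothetical helper (the `build_prompt` name is illustrative, not part of this repo) showing how the template above can be filled in with plain string formatting before tokenization:
+
+ ```python
+ # Vicuna-style template copied from the prompt template above.
+ VICUNA_TEMPLATE = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's "
+     "questions. USER: {prompt} ASSISTANT:"
+ )
+
+ def build_prompt(user_message: str) -> str:
+     """Insert the user's message into the {prompt} slot."""
+     return VICUNA_TEMPLATE.format(prompt=user_message)
+
+ print(build_prompt("請介紹台灣的夜市文化"))
+ ```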