---
library_name: transformers
tags:
- 4bit
- llama
- llama-2
- facebook
- meta
- 7b
- quantized
- ExLlamaV2
- exl2
- 5.0-bpw
license: llama2
pipeline_tag: text-generation
---

# Llama-2-7b-chat-hf-5.0-bpw-exl2

<!-- Provide a quick summary of what the model is/does. -->

This repo contains a 5.0-bpw (bits per weight) ExLlamaV2 quantization of Meta's [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).

## Model Details

- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)

### About ExLlamaV2 quantization

- ExLlamaV2 GitHub repo: [ExLlamaV2](https://github.com/turboderp/exllamav2)
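
exl2 quantizations such as this one are typically produced with ExLlamaV2's conversion script. A sketch for reference only (all paths are placeholders; flags follow the ExLlamaV2 README, and conversion requires a CUDA GPU):

```shell
# Convert an fp16 model to exl2 at 5.0 bits per weight.
# -i: input model dir, -o: working dir, -cf: output dir, -b: target bpw
python convert.py \
    -i /path/to/Llama-2-7b-chat-hf \
    -o /path/to/working_dir \
    -cf /path/to/output_dir \
    -b 5.0
```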

# How to Get Started with the Model

Use the code below to get started with the model.

## How to run from Python code

#### First install the package
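
A minimal install sketch — ExLlamaV2 is on PyPI, but it requires a CUDA-capable GPU and a matching PyTorch build, so your environment may need extra steps:

```shell
# Install ExLlamaV2 (needs CUDA and a compatible PyTorch already installed)
pip install exllamav2

# huggingface_hub is used below to download the quantized weights
pip install huggingface_hub
```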

#### Import
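
The imports below follow ExLlamaV2's own examples; they are a sketch and assume the `exllamav2` package installed above:

```python
from exllamav2 import (
    ExLlamaV2,           # the model itself
    ExLlamaV2Config,     # reads config.json from the model directory
    ExLlamaV2Cache,      # key/value cache used during generation
    ExLlamaV2Tokenizer,  # tokenizer wrapper for the model
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
```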

#### Use a pipeline as a high-level helper
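
exl2 checkpoints are generally not loadable through the vanilla `transformers` pipeline; ExLlamaV2's base generator plays the high-level role instead. A minimal sketch — the repo id is assumed from this model's name, the sampling values are illustrative, and a CUDA GPU is required:

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download the quantized weights from the Hub (repo id assumed from the model name)
model_dir = snapshot_download(repo_id="alokabhishek/Llama-2-7b-chat-hf-5.0-bpw-exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split layers across available GPU memory
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8           # illustrative sampling values
settings.top_p = 0.9

prompt = "[INST] Tell me a joke about programmers. [/INST]"
output = generator.generate_simple(prompt, settings, num_tokens=128)
print(output)
```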

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

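Llama-2 chat models expect the `[INST]`/`<<SYS>>` prompt format. The helper below (a hypothetical name, not part of any library) builds such a prompt:

```python
def build_llama2_chat_prompt(user_message: str, system_message: str = "") -> str:
    """Wrap a user message in Llama-2's chat prompt format."""
    if system_message:
        # System prompt goes inside <<SYS>> tags at the start of the first turn
        return f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"

prompt = build_llama2_chat_prompt("Tell me a joke.", "You are a helpful assistant.")
print(prompt)
```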
### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]