Files changed (1)
  1. README.md +118 -2
README.md CHANGED
@@ -1,7 +1,110 @@
 ---
- pipeline_tag: text-generation
 license: other
+ pipeline_tag: text-generation
 license_name: microsoft-research-license
+ model-index:
+ - name: Orca-2-13b-Alpaca-Uncensored
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 61.09
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=athirdpath/Orca-2-13b-Alpaca-Uncensored
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 79.27
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=athirdpath/Orca-2-13b-Alpaca-Uncensored
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 60.13
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=athirdpath/Orca-2-13b-Alpaca-Uncensored
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 53.59
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=athirdpath/Orca-2-13b-Alpaca-Uncensored
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 77.43
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=athirdpath/Orca-2-13b-Alpaca-Uncensored
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 38.29
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=athirdpath/Orca-2-13b-Alpaca-Uncensored
+       name: Open LLM Leaderboard
 ---
 This model is a fine-tuned version of microsoft/Orca-2-13b on a subset of the Vezora/Mini_Orca_Uncencored_Alpaca dataset, adjusted to demonstrate the relationship between instruction and input, with some particularly spicy prompts added to reduce the risk of rejections.
 
@@ -9,4 +112,17 @@ Only the q_proj and k_proj modules were targeted and a low rank (8) was used, in
 
 Reasoning stayed solid (for a 13b model) and I consider this a success. Performance is slightly worse than the OG Orca-2 in Ooba's chat mode, and comparable in Alpaca chat-instruct mode to the OG in ChatML chat-instruct mode.
 
- May still reject some shocking prompts, but can easily be overcome with author's note or character card.
+ May still reject some shocking prompts, but this can easily be overcome with an author's note or character card.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_athirdpath__Orca-2-13b-Alpaca-Uncensored).
+ 
+ | Metric                            | Value |
+ |-----------------------------------|------:|
+ | Avg.                              | 61.63 |
+ | AI2 Reasoning Challenge (25-Shot) | 61.09 |
+ | HellaSwag (10-Shot)               | 79.27 |
+ | MMLU (5-Shot)                     | 60.13 |
+ | TruthfulQA (0-shot)               | 53.59 |
+ | Winogrande (5-shot)               | 77.43 |
+ | GSM8k (5-shot)                    | 38.29 |
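
For reference, a minimal sketch of running the fine-tuned model for text generation with the Alpaca-style instruction/input layout the card describes. The repo id is taken from the leaderboard links above; the exact prompt template and the generation settings are assumptions, not something the card specifies.

```python
# Sketch: load the model and prompt it in Alpaca instruction/input format.
# The prompt template below is the standard Alpaca layout and is an assumption;
# adjust it if the training data used different wording.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "athirdpath/Orca-2-13b-Alpaca-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 13b model in fp16 needs roughly 26 GB of VRAM
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize the input in one sentence.\n\n"
    "### Input:\nOrca 2 is a 13 billion parameter model fine-tuned for reasoning tasks.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```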
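The second hunk's context line notes that only the q_proj and k_proj modules were targeted and a low rank (8) was used. A rough peft sketch of that adapter configuration is below; everything other than the target modules and the rank (alpha, dropout, base-model loading details, and the training loop itself) is an assumption, since the card does not give the full training setup.

```python
# Sketch: LoRA adapter configuration matching the card's description
# (q_proj and k_proj only, rank 8) on top of the base Orca-2-13b model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b")

lora_config = LoraConfig(
    r=8,                                  # low rank, as stated in the card
    lora_alpha=16,                        # assumed value
    lora_dropout=0.05,                    # assumed value
    target_modules=["q_proj", "k_proj"],  # only these modules were targeted
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights should be trainable
```

Restricting the adapter to the attention query and key projections at rank 8 keeps the trainable parameter count very small, which is consistent with the card's claim that reasoning ability was largely preserved after fine-tuning.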