---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Maverick-7B
---

Quantizations of https://huggingface.co/feeltheAGI/Maverick-7B
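A single quant can be fetched programmatically with `huggingface_hub` before loading it in one of the clients listed below. This is a minimal sketch; both the repo id and the GGUF filename are placeholders, so check this repository's file list for the real names.

```python
from huggingface_hub import hf_hub_download

# Placeholders (assumptions, not taken from this card): substitute this
# repository's id and one of the GGUF filenames from its file listing.
repo_id = "your-namespace/Maverick-7B-GGUF"
filename = "Maverick-7B.Q4_K_M.gguf"

gguf_path = hf_hub_download(repo_id=repo_id, filename=filename)
print(gguf_path)
```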


### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp) (Python sketch below)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [ollama](https://github.com/ollama/ollama)
* [jan](https://github.com/janhq/jan)

### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* [Backyard AI](https://backyard.ai/)
* More will be added...
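For the llama.cpp route, here is a minimal inference sketch using the `llama-cpp-python` bindings (built on llama.cpp, though not itself listed above). The model path, prompt format, and sampling settings are assumptions, not recommendations from this card.

```python
# Sketch: run a downloaded GGUF quant with the llama-cpp-python bindings
# (pip install llama-cpp-python). Path, prompt, and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Maverick-7B.Q4_K_M.gguf",  # hypothetical filename, see the download sketch above
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers when built with GPU support
)

out = llm("Q: What does GGUF quantization trade off?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```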
---

# From original readme

This model is a merge of the following models:
* [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
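The GGUF files in this repository target llama.cpp-style runtimes; the unquantized merged model can also be run directly with `transformers` from the upstream repo linked above. A minimal sketch, with an illustrative prompt and generation settings:

```python
# Sketch: run the original (unquantized) merged model with transformers.
# Model id is the upstream repo linked above; prompt and settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "feeltheAGI/Maverick-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```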


## 🏆 Evaluation
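The tables below follow lm-evaluation-harness-style output (task, version, metric, value, stderr) and are copied from the original model card. Purely as a hedged illustration, a single task can be scored with a recent harness release through its Python API; this is not necessarily the setup that produced the numbers below, and task names differ across harness versions.

```python
# Illustrative only: score one task with lm-evaluation-harness (pip install lm-eval).
# Assumes a recent 0.4.x harness; the numbers in this card may come from a different
# version/suite, so task names and results will not match exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=feeltheAGI/Maverick-7B,dtype=float16",
    tasks=["arc_challenge"],
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```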

### TruthfulQA

| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.5165|± |0.0175|
| | |mc2 |0.6661|± |0.0152|

### GPT4ALL

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6442|± |0.0140|
| | |acc_norm|0.6570|± |0.0139|
|arc_easy | 0|acc |0.8645|± |0.0070|
| | |acc_norm|0.8304|± |0.0077|
|boolq | 1|acc |0.8850|± |0.0056|
|hellaswag | 0|acc |0.6813|± |0.0047|
| | |acc_norm|0.8571|± |0.0035|
|openbookqa | 0|acc |0.3640|± |0.0215|
| | |acc_norm|0.4800|± |0.0224|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8460|± |0.0084|
|winogrande | 0|acc |0.7869|± |0.0115|

### AGIEval

| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2717|± |0.0280|
| | |acc_norm|0.2559|± |0.0274|
|agieval_logiqa_en | 0|acc |0.3902|± |0.0191|
| | |acc_norm|0.3856|± |0.0191|
|agieval_lsat_ar | 0|acc |0.2565|± |0.0289|
| | |acc_norm|0.2478|± |0.0285|
|agieval_lsat_lr | 0|acc |0.5118|± |0.0222|
| | |acc_norm|0.5216|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6543|± |0.0291|
| | |acc_norm|0.6506|± |0.0291|
|agieval_sat_en | 0|acc |0.7961|± |0.0281|
| | |acc_norm|0.8010|± |0.0279|
|agieval_sat_en_without_passage| 0|acc |0.4660|± |0.0348|
| | |acc_norm|0.4709|± |0.0349|
|agieval_sat_math | 0|acc |0.3227|± |0.0316|
| | |acc_norm|0.3045|± |0.0311|


### Bigbench

| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5684|± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6612|± |0.0247|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.4380|± |0.0309|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2173|± |0.0218|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3320|± |0.0211|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2243|± |0.0158|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5667|± |0.0287|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4260|± |0.0221|
|bigbench_navigate | 0|multiple_choice_grade|0.5310|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7230|± |0.0100|
|bigbench_ruin_names | 0|multiple_choice_grade|0.5379|± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2956|± |0.0145|
|bigbench_snarks | 0|multiple_choice_grade|0.6961|± |0.0343|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.7424|± |0.0139|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.4690|± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2304|± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1880|± |0.0093|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5667|± |0.0287|