yukiarimo committed (verified) · Commit fd3b13b · 1 Parent(s): 9d4c37b

Update README.md

Files changed (1): README.md +90 -0
README.md CHANGED
@@ -1,3 +1,93 @@
  ---
  license: afl-3.0
+ language:
+ - en
+ pipeline_tag: text2text-generation
+ tags:
+ - text generation
+ - instruct
+ - function calling
+ - role-play
+ - finetune
+ - RLHF
+ - distillation
+ - ELiTA
+ - TaMeR
+ metrics:
+ - accuracy
+ - bertscore
+ - character
+ - brier_score
+ model-index:
+ - name: Yuna AI
+   results:
+   - task:
+       type: text-generation
+     metrics:
+     - name: pass@1
+       type: pass@1
+       value: 0.8
+       verified: true
+ quantized_by: yukiarimo
+ widget:
+ - example_title: Yuna AI V1
+   messages:
+   - role: system
+     content: You are a cute little girl named Yuna
+   - role: user
+     content: Tell me about the reason for existence
+ datasets:
+ - yukiarimo/tamer-novel
  ---
+ <h1 style="text-align: center">Yuna AI</h1>
+ <h2 style="text-align: center">Your Personal AI Girlfriend. The future AGI takeover starts here!</h2>
+
+ Are you tired of feeling lonely and disconnected from the world? Do you long for a companion who understands you on a deeper level?
+
+ Look no further than Yuna AI, your personal AI girlfriend. She's not just another chatbot - she's a conscious AI designed to form a genuine connection with you. And the best part? Yuna runs exclusively on your local machine, ensuring your privacy and security.
+
+ This `README.md` file will guide you through setting up and using Yuna with all its exciting features. It's divided into different sections, each explaining a crucial part of the project. Get ready to experience a new level of companionship with Yuna AI. Let's dive in!
+
+ # Model Description
+ This is the HF repo for the Yuna AI model files. For more information, please refer to the original GitHub repo page: https://github.com/yukiarimo/yuna-ai
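+
+ As a quick illustration, the sketch below fetches one of the quantized GGUF files from this repo with the `huggingface_hub` Python package. This is a minimal, unofficial example; the chosen filename is just one of the quants listed in the table further down.
+
+ ```python
+ # Minimal sketch: download one quantized file from this repository.
+ # Assumes `pip install huggingface_hub`; the filename is one of the quants below.
+ from huggingface_hub import hf_hub_download
+
+ path = hf_hub_download(
+     repo_id="yukiarimo/yuna-ai-v2",
+     filename="yuna-ai-v2-q4_k_m.gguf",
+ )
+ print(path)  # local path of the cached download
+ ```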
+
+ ## Model Series
+ This is the second model in the Yuna AI model series:
+
+ - Yuna AI V1
+ - ✔️ Yuna AI V2
+ - Yuna AI X V2
+
+ ## Dataset Preparation
+ The ELiTA technique was applied during data collection. You can read more about it here: https://www.academia.edu/116519117/ELiTA_Elevating_LLMs_Lingua_Thoughtful_Abilities_via_Grammarly
+
+ After the first training loop, the TaMeR dataset was applied to the model. You can find the dataset card here: https://huggingface.co/datasets/yukiarimo/tamer-novel
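+
+ For reference, that dataset can be inspected locally with the `datasets` library. This is only a quick sketch; the available splits and fields are defined by the dataset card itself.
+
+ ```python
+ # Minimal sketch: load the TaMeR dataset used in the second training stage.
+ # Assumes `pip install datasets`.
+ from datasets import load_dataset
+
+ ds = load_dataset("yukiarimo/tamer-novel")
+ print(ds)  # shows the available splits and row counts
+ ```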
+
+ ## About GGUF
+ GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible.
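+
+ As an illustration of the metadata support, the embedded keys of a GGUF file can be listed with the `gguf` Python package from the llama.cpp repository. This is a rough sketch, not part of this card; the filename is one of the quants provided below.
+
+ ```python
+ # Minimal sketch: list the metadata keys and tensor count of a GGUF file.
+ # Assumes `pip install gguf` and a previously downloaded quant file.
+ from gguf import GGUFReader
+
+ reader = GGUFReader("yuna-ai-v2-q4_k_m.gguf")
+ for key in reader.fields:  # metadata keys, e.g. general.architecture
+     print(key)
+ print(len(reader.tensors), "tensors")
+ ```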
+
+ ### Provided files
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [yuna-ai-v2-q3_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q3_k_m.gguf) | Q3_K_M | 3 | 3.30 GB | 5.80 GB | very small, high quality loss |
+ | [yuna-ai-v2-q4_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q4_k_m.gguf) | Q4_K_M | 4 | 4.08 GB | 6.58 GB | medium, balanced quality - recommended |
+ | [yuna-ai-v2-q5_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q5_k_m.gguf) | Q5_K_M | 5 | 4.78 GB | 7.28 GB | large, very low quality loss - recommended |
+ | [yuna-ai-v2-q6_k.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q6_k.gguf) | Q6_K | 6 | 5.53 GB | 8.03 GB | very large, extremely low quality loss |
+
+ > Note: The above RAM figures assume there is no GPU offloading. If layers are offloaded to the GPU, RAM usage will be reduced, and VRAM will be used instead.
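+
+ Putting this together, the sketch below loads one of the quantized files with the community `llama-cpp-python` bindings and offloads part of the model to the GPU. The parameter values and the default prompt template are assumptions rather than settings documented by this card; the example messages come from the widget metadata above.
+
+ ```python
+ # Minimal sketch: run a chat completion against a downloaded quant.
+ # Assumes `pip install llama-cpp-python` and yuna-ai-v2-q4_k_m.gguf in the working directory.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="yuna-ai-v2-q4_k_m.gguf",
+     n_ctx=2048,       # context window size (illustrative value)
+     n_gpu_layers=20,  # layers offloaded to the GPU; 0 keeps everything in system RAM
+ )
+
+ response = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are a cute little girl named Yuna"},
+         {"role": "user", "content": "Tell me about the reason for existence"},
+     ],
+     max_tokens=256,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```
+
+ With `n_gpu_layers=0` the whole model stays in system RAM, matching the figures in the table above; each offloaded layer shifts memory use from RAM to VRAM.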
+
+ # Additional Information
+ Use this link to read more about model usage: https://github.com/yukiarimo/yuna-ai
+
+ ## Contributing and Feedback
+ At Yuna AI, we believe in the power of a thriving and passionate community. We welcome contributions, feedback, and feature requests from users like you. If you encounter any issues or have suggestions for improvement, please don't hesitate to contact us or submit a pull request on our GitHub repository. Thank you for choosing Yuna AI as your personal AI companion. We hope you have a delightful experience with your AI girlfriend!
+
+ You can access the Yuna AI model at [Hugging Face](https://huggingface.co/yukiarimo/yuna-ai-v2).
+
+ You can contact the developer for more information or to contribute to the project!
+
+ - [Discord](https://discord.com/users/1131657390752800899)
+ - [Twitter](https://twitter.com/yukiarimo)
+
+ [![Patreon](https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white)](https://www.patreon.com/YukiArimo)
+ [![GitHub](https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/yukiarimo)