Enderchef committed · verified · Commit 927736e · 1 Parent(s): c53c440

Update README.md

Files changed (1): README.md (+191 −3)

README.md CHANGED
@@ -1,3 +1,191 @@
- ---
- license: apache-2.0
- ---
---
license: other
license_name: iconn
license_link: LICENSE
library_name: transformers
tags:
- emotional-ai
- ICONN
- chatbot
- base
co2_eq_emissions:
  emissions: 0.34
  source: CodeCarbon
  training_type: pretraining
  geographical_location: US-West
  hardware_used: 9 x B200
pipeline_tag: text-generation
---
![ICONN AI Logo](https://i.postimg.cc/gJwHqh1D/svgviewer-png-output.png)

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/collections/ICONNAI/iconn-1-6851e8a88ed4eb66b4fd0132" target="_blank" style="margin: 2px;">
    <img alt="ICONN 1 Models" src="https://img.shields.io/badge/📦_ICONN_1_Models-HuggingFace-1CBEEF?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
  <a href="https://huggingface.co/spaces/ICONNAI/ICONN-Mini-Chat" target="_blank" style="margin: 2px;">
    <img alt="ICONN 1 Chat" src="https://img.shields.io/badge/💬_ICONN_1_Chat-Online-65C7F9?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
  <a href="https://huggingface.co/ICONNAI" target="_blank" style="margin: 2px;">
    <img alt="ICONN on Hugging Face" src="https://img.shields.io/badge/🤗_ICONN_on_HF-ICONNAI-A4BCF0?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
  <a href="https://opensource.org/license/apache-2-0" target="_blank" style="margin: 2px;">
    <img alt="License Apache 2.0" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-5C63DA?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;" />
  </a>
</div>
## ICONN 1

Introducing **ICONN 1 Mini Beta**, a cutting-edge open-source AI model with just **7 billion parameters**, designed for natural, human-like language understanding and generation. Despite its compact size, it delivers strong performance through an efficient architecture and careful tuning. ICONN 1 Mini Beta represents the next step in accessible, conversational AI.

Developed entirely from scratch, ICONN-1-Mini-Beta is built on the new **ICONN** framework and comprises **7 billion parameters**.

ICONN-1 is released in three distinct forms to serve different application needs:
- **ICONN-1-Mini-Beta** (this model) is a compact 7B model trained as a lightweight alternative to ICONN 1.
- **ICONN-1** is optimized for natural, emotionally resonant, conversational interactions.
- **ICONN-e1** is a specialized variant fine-tuned for advanced reasoning, critical analysis, and complex problem-solving.

Together, these models represent a major step forward in the evolution of AI systems, demonstrating not only deep reasoning but also a commitment to openness, accessibility, and human-aligned intelligence.
## Usage

To run **ICONN 1 Mini Beta**, you need:

- **Any hardware, CPU or GPU; just make sure you have about 15 GB of free storage.**

> Run the code below to start ICONN 1 Mini Beta:
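The ~15 GB figure is consistent with the model size: at float16 precision each of the 7 billion parameters takes 2 bytes, so the weights alone come to roughly 14 GB. A back-of-the-envelope check (ignoring tokenizer and config overhead):

```python
# Rough storage estimate for a 7B-parameter model stored in float16.
params = 7_000_000_000
bytes_per_param = 2  # float16 = 16 bits = 2 bytes
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.0f} GB")  # → 14 GB; real checkpoints add some overhead
```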
```python
import torch
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

model_id = "ICONNAI/ICONN-1-Mini-Beta"

try:
    # Load the weights in float16 and let accelerate place them on the available device(s).
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
except Exception as e:
    raise SystemExit(f"Exiting due to model loading error: {e}")


def generate_response(
    message: str,
    max_new_tokens: int = 2048,
    temperature: float = 0.7,
    top_p: float = 0.9,
    top_k: int = 50,
    repetition_penalty: float = 1.2,
) -> str:
    conversation = [{"role": "user", "content": message}]

    try:
        input_ids = tokenizer.apply_chat_template(
            conversation, return_tensors="pt", enable_thinking=True
        )
    except Exception as e:
        return f"Error applying chat template: {e}"

    input_ids = input_ids.to(model.device)

    # Stream decoded text as it is generated; skip the prompt and special tokens.
    streamer = TextIteratorStreamer(
        tokenizer, timeout=20.0, skip_prompt=True, skip_special_tokens=True
    )

    adjusted_top_k = max(1, int(top_k))

    generate_kwargs = dict(
        input_ids=input_ids,
        streamer=streamer,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=top_p,
        top_k=adjusted_top_k,
        temperature=temperature,
        num_beams=1,
        repetition_penalty=repetition_penalty,
    )

    try:
        # Run generation in a background thread so the stream can be consumed here.
        t = Thread(target=model.generate, kwargs=generate_kwargs)
        t.start()
    except Exception as e:
        return f"Error starting generation thread: {e}"

    outputs = []
    for text in streamer:
        outputs.append(text)
    return "".join(outputs)


if __name__ == "__main__":
    question = "Can you explain briefly to me what the Python programming language is?"
    print(f"User Question: {question}")

    response = generate_response(question)
    print(f"Bot Response: {response}")
```
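The `Thread` + `TextIteratorStreamer` combination in the script is a standard producer/consumer pattern: `model.generate` pushes decoded text chunks from a background thread while the caller iterates over them as they arrive. The core idea can be sketched without transformers using a plain queue (the `ToyStreamer` and `fake_generate` names here are our own illustrative stand-ins, not library API):

```python
import queue
from threading import Thread

class ToyStreamer:
    """Minimal stand-in for TextIteratorStreamer: a producer thread puts
    text chunks; iterating yields them until a sentinel signals the end."""
    _SENTINEL = object()

    def __init__(self):
        self._q = queue.Queue()

    def put(self, text):
        self._q.put(text)

    def end(self):
        self._q.put(self._SENTINEL)

    def __iter__(self):
        while True:
            item = self._q.get()
            if item is self._SENTINEL:
                return
            yield item

def fake_generate(streamer):
    # Stands in for model.generate: emits chunks, then signals completion.
    for tok in ["Python ", "is ", "a ", "language."]:
        streamer.put(tok)
    streamer.end()

streamer = ToyStreamer()
t = Thread(target=fake_generate, args=(streamer,))
t.start()
print("".join(streamer))  # prints "Python is a language."
t.join()
```

Because the consumer blocks on the queue rather than polling, chunks can be displayed the moment they are produced; the real script simply joins them into one string at the end.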