aashish1904 committed
Commit b13589f · verified · 1 Parent(s): 9b662fd

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +217 -0
README.md ADDED

---
license: creativeml-openrail-m
pipeline_tag: text-generation
library_name: transformers
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- codepy
- safetensors
- ollama
- llama-cpp
- trl
- deep-think
- coder
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Codepy-Deepthink-3B-GGUF
This is a quantized version of [prithivMLmods/Codepy-Deepthink-3B](https://huggingface.co/prithivMLmods/Codepy-Deepthink-3B), created using llama.cpp.
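
The snippet below is a minimal sketch of one way to fetch a quantization from this repo with the `huggingface_hub` Python client; the GGUF filename is a placeholder, so pick the actual file you want from the repository's file list.

```python
# Minimal sketch: download one GGUF quantization from this repo via huggingface_hub.
# NOTE: the filename below is a placeholder; check the repository's file list for
# the quantization you actually want (e.g. Q4, Q5, Q8 variants).
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Codepy-Deepthink-3B-GGUF",
    filename="Codepy-Deepthink-3B.Q4_K_M.gguf",  # placeholder filename
)
print(f"GGUF file saved to: {gguf_path}")
```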

# Original Model Card

# **Codepy 3B Deep Think Model File**

The **Codepy 3B Deep Think Model** is a fine-tuned version of the **meta-llama/Llama-3.2-3B-Instruct** base model, designed for text generation tasks that require deep reasoning, logical structuring, and problem-solving. This model leverages its optimized architecture to provide accurate and contextually relevant outputs for complex queries, making it ideal for applications in education, programming, and creative writing.

With its robust natural language processing capabilities, **Codepy 3B Deep Think** excels at generating step-by-step solutions, creative content, and logical analyses. Its architecture integrates an advanced understanding of both structured and unstructured data, ensuring precise text generation aligned with user inputs.

| **Model Content**                  | **Size**  | **Description**                                  | **Upload Status** |
|------------------------------------|-----------|--------------------------------------------------|-------------------|
| `.gitattributes`                   | 1.57 kB   | Git LFS configuration for large files.           | Uploaded          |
| `README.md`                        | 221 Bytes | Basic repository information.                    | Updated           |
| `config.json`                      | 994 Bytes | Model configuration with architectural details.  | Uploaded          |
| `generation_config.json`           | 248 Bytes | Default generation parameters.                   | Uploaded          |
| `pytorch_model-00001-of-00002.bin` | 4.97 GB   | Sharded PyTorch model weights (part 1 of 2).     | Uploaded (LFS)    |
| `pytorch_model-00002-of-00002.bin` | 1.46 GB   | Sharded PyTorch model weights (part 2 of 2).     | Uploaded (LFS)    |
| `pytorch_model.bin.index.json`     | 21.2 kB   | Index file mapping model shards.                 | Uploaded          |
| `special_tokens_map.json`          | 477 Bytes | Maps special tokens to their respective IDs.     | Uploaded          |
| `tokenizer.json`                   | 17.2 MB   | Full tokenizer vocabulary and merges.            | Uploaded (LFS)    |
| `tokenizer_config.json`            | 57.5 kB   | Tokenizer configuration details.                 | Uploaded          |
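
Because the original repository ships standard Transformers-format weights (sharded PyTorch `.bin` shards plus tokenizer files, per the table above), a minimal loading and generation sketch might look like the following. This is an untested outline; the dtype and chat-template usage are assumptions based on the Llama 3.2 Instruct base model.

```python
# Minimal sketch (untested): load the original full-precision model with transformers
# and generate a reply. Assumes a recent transformers release and enough GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Codepy-Deepthink-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is suitable for this checkpoint
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Develop a Python program to generate random passwords of 8 characters."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```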

| Run with LM Studio         | Details                                                                             |
|----------------------------|-------------------------------------------------------------------------------------|
| *Run with LM Studio*       | https://lmstudio.ai/                                                                |
| *Demo on LM Studio*        | https://drive.google.com/file/d/1CHdfjYrwMnk9ACvS40Abfy3xNXnCubKG/view?usp=sharing  |
| *Codepy-Deepthink-3B-GGUF* | https://huggingface.co/prithivMLmods/Codepy-Deepthink-3B-GGUF                       |

---
# **<span style="color:red;">Sample Deepthink Inference</span>**

>>> Develop a Python program to generate random passwords that consist of 8 characters. Not applicable

````markdown
# Random Password Generator

This Python program generates a random password of a specified length (default: 8 characters) that includes a mix of lowercase letters, uppercase letters, digits, and spaces. The program ensures the generated password meets basic security requirements while maintaining simplicity.

## Features
- Generates alphanumeric passwords with optional spaces.
- Ensures at least one lowercase letter, one uppercase letter, one digit, and one space in the password.
- Shuffles the characters to randomize the order.
- Includes error handling for invalid input lengths.

## Requirements
- Python 3.x

## Usage
1. Clone or download this repository.
2. Run the script using Python:

```bash
python password_generator.py
```

3. The script will generate and print an 8-character random password to the console.

## Code

```python
import random

def generate_password(length):
    """
    Generates a random alphanumeric password of the specified length.

    Ensures that at least one lowercase letter, one uppercase letter,
    one digit, and one space are included in the password.

    Args:
        length: The number of characters in the password.

    Returns:
        A string representing the generated password or None if the input is invalid.
    """

    # Define a set of alphanumeric characters with spaces
    characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 '

    # Validate the length
    if length < 1:
        return None

    # Handle invalid length
    if length > len(characters):
        print("Invalid password length. It should be less than or equal to", len(characters))
        return None

    # Ensure at least one character from each required group
    required_characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 '
    if length > 1:
        password_length_without_requirements = length - 4
        random_string = ''.join(random.choice(required_characters) for _ in range(password_length_without_requirements))

    # Fill the rest of the password with random characters
    remaining_chars_needed = length - len(random_string)
    all_possible_chars = list(characters)
    if length > 1:
        random_character = random.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ')
    else:
        random_character = random.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ')

    password = random_string + random_character * remaining_chars_needed

    # Shuffle the password to avoid predictable patterns
    password_list = list(password)
    random.shuffle(password_list)
    password = ''.join(password_list)

    return password

# Example Usage
password_length = 8
generated_password = generate_password(password_length)

if generated_password is not None:
    print(f"Generated Password: {generated_password}")
else:
    print("Failed to generate a password. Please ensure the length is valid (between 1 and", len(characters), ").")
```

## Example Output
```
Generated Password: g7x 2PqA
```

## Customization
To customize the password length, modify the `password_length` variable in the script.

## Security Notes
- This implementation uses Python's `random` module, which is suitable for general-purpose randomness. For cryptographically secure passwords, consider using the `secrets` module.
- The character set includes spaces for additional complexity, but you can modify the `characters` string to include other symbols (e.g., `!@#$%^&*`).
````
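
As the security note in the sample output suggests, the standard-library `secrets` module is the better choice when a password actually needs to be secure. The following is a minimal sketch of that alternative (it is not part of the model's output):

```python
# Minimal sketch (not model output): a cryptographically secure variant using secrets.
import secrets
import string

def generate_secure_password(length: int = 8) -> str:
    """Return a random password of letters and digits drawn with secrets.choice."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_secure_password(8))
```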

---
# **Model Architecture**

Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

# **Run with Ollama [ Ollama Run ]**

Ollama simplifies running machine learning models. This guide walks you through downloading, installing, and running GGUF models in minutes.

## Table of Contents

- [Download and Install](#download-and-install)
- [Run GGUF Models](#run-gguf-models)
- [Running the Model](#running-the-model)
- [Sample Usage](#sample-usage)

## Download and Install

Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.

## Run GGUF Models

1. **Create the Model File**
   Create a Modelfile (a plain-text file), e.g., `metallama`.

2. **Add the Template Command**
   Include a `FROM` line in the file that points to the GGUF file you downloaded (adjust the filename to match your file):
   ```bash
   FROM Llama-3.2-1B.F16.gguf
   ```

3. **Create and Verify the Model**
   Run the following command:
   ```bash
   ollama create metallama -f ./metallama
   ```
   Verify the model with:
   ```bash
   ollama list
   ```

## Running the Model

Run your model with:
```bash
ollama run metallama
```

### Sample Usage

Interact with the model:
```plaintext
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration...
```
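
If you would rather call the model from Python, the `ollama` Python client offers an equivalent interface. The sketch below assumes the package is installed (`pip install ollama`), the Ollama server is running, and the `metallama` model was created as described above:

```python
# Minimal sketch: query the locally created `metallama` model via the ollama
# Python client. Response access may vary slightly across client versions.
import ollama

response = ollama.chat(
    model="metallama",
    messages=[{"role": "user", "content": "write a mini passage about space x"}],
)
print(response["message"]["content"])
```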

---

With these steps, you can easily run custom models using Ollama. Adjust as needed for your specific use case.