AlicanKiraz0 committed on
Commit 44468a6 · verified · 1 Parent(s): 8906ccc

Update README.md

Files changed (1)
  1. README.md +44 -51
README.md CHANGED
@@ -1,51 +1,44 @@
- ---
- base_model: AlicanKiraz0/SenecaLLM-x-QwQ-32B
- license: apache-2.0
- tags:
- - llama-cpp
- - gguf-my-repo
- ---
-
- # AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q8_0-GGUF
- This model was converted to GGUF format from [`AlicanKiraz0/SenecaLLM-x-QwQ-32B`](https://huggingface.co/AlicanKiraz0/SenecaLLM-x-QwQ-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/AlicanKiraz0/SenecaLLM-x-QwQ-32B) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux).
-
- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q8_0-GGUF --hf-file senecallm-x-qwq-32b-q8_0.gguf -p "The meaning of life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q8_0-GGUF --hf-file senecallm-x-qwq-32b-q8_0.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q8_0-GGUF --hf-file senecallm-x-qwq-32b-q8_0.gguf -p "The meaning of life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q8_0-GGUF --hf-file senecallm-x-qwq-32b-q8_0.gguf -c 2048
- ```
 
+ ---
+ license: mit
+ base_model:
+ - Qwen/QwQ-32B
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - cybersecurity
+ - ethicalhacking
+ - informationsecurity
+ - pentest
+ - code
+ - applicationsecurity
+ ---
+
+ <img src="https://huggingface.co/AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q4_K_M-GGUF/resolve/main/QwQ-32B_x_Seneca_v1.4.png" width="1000" />
+
+ Fine-tuned by Alican Kiraz
+
+ [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://tr.linkedin.com/in/alican-kiraz)
+ ![X (formerly Twitter) URL](https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2FAlicanKiraz0)
+ ![YouTube Channel Subscribers](https://img.shields.io/youtube/channel/subscribers/UCEAiUT9FMFemDtcKo9G9nUQ)
+
+ Links:
+ - Medium: https://alican-kiraz1.medium.com/
+ - LinkedIn: https://tr.linkedin.com/in/alican-kiraz
+ - X: https://x.com/AlicanKiraz0
+ - YouTube: https://youtube.com/@alicankiraz0
+
+ With the release of the new Qwen QwQ-32B, I quickly began training SenecaLLM v1.4 on top of this model. Training took about 30 hours in BF16 on 4×H200 GPUs.
+
+ **This project does not pursue any profit.**
+
+ With the new dataset I’ve prepared, it can produce quite good outputs in the following areas:
+ * Information Security v1.5
+ * Incident Response v1.3.1
+ * Threat Hunting v1.3.2
+ * Ethical Exploit Development v2.0
+ * Purple Team Tactics v1.3
+ * Reverse Engineering v2.0
+
+ "Those who shed light on others do not remain in darkness..."