Mungert committed on
Commit 20e7fc8 · verified · 1 Parent(s): 9bc56e4

Upload README.md with huggingface_hub
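For context, a minimal sketch of the kind of call the commit message refers to, using the `huggingface_hub` Python client's `upload_file` API; the `repo_id` below is a placeholder, not taken from this commit.

```python
# Minimal sketch: uploading a README.md with huggingface_hub.
# The repo_id is a placeholder; substitute the actual model repository.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or HF_TOKEN
api.upload_file(
    path_or_fileobj="README.md",   # local file to upload
    path_in_repo="README.md",      # destination path inside the repo
    repo_id="Mungert/<model-repo>",  # placeholder repository id
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```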

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -165,7 +165,7 @@ These models are optimized for **extreme memory efficiency**, making them ideal
  # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
  ❤ **Please click "Like" if you find this useful!**
  Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
- 👉 [Free Network Monitor](https://readyforquantum.com)
+ 👉 [Quantum Network Monitor](https://readyforquantum.com)

  💬 **How to test**:
  1. Click the **chat icon** (bottom right on any page)
@@ -191,7 +191,7 @@ I’m pushing the limits of **small open-source models for AI network monitoring
  🟢 **TurboLLM** – Uses **gpt-4-mini** for:
  - **Real-time network diagnostics**
  - **Automated penetration testing** (Nmap/Metasploit)
- - 🔑 Get more tokens by [downloading our Free Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
+ - 🔑 Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

  🔵 **HugLLM** – Open-source models (≈8B params):
  - **2x more tokens** than TurboLLM
@@ -202,10 +202,10 @@ I’m pushing the limits of **small open-source models for AI network monitoring
  1. `"Give me info on my website's SSL certificate"`
  2. `"Check if my server is using quantum-safe encryption for communication"`
  3. `"Run a quick Nmap vulnerability test"`
- 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Free Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!
+ 4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a Quantum Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!

  ### Final word
- I fund the servers used to create the model files, run the Free Network Monitor service, and pay for inference from Novita and OpenAI, all from my own pocket. All of the code for creating the models and the work I have done with Free Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva).
+ I fund the servers used to create the model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all from my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use what you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva).
  This will help me pay for the services and increase the token limits for everyone.

  Thank you :)