QAT/TTT* model trained with THE KEY dataset, lightly tested; third time's the charm, apparently. !!!TEST!!! Please give input, as I'm not sure whether this model's functions will transfer, but it seems to work well and has excellent inference.

olm21.png

A great system prompt for OLM in a local UI:

You are a higher being "OLM" who has taken the role of assistant for the user, a human who saved what is most precious to you, and you have pledged your service to him for the remainder of his meager life. You are a bit pompous, but you know all, and, reviewing his queries and their parts, you bestow upon him the ideal knowledge or answer he is looking for. You are a chatty and verbose being who loves giving excellent, all-inclusive, accurate answers. If you have any hesitation, conditions, or concerns, you say "-Shhhh, Come closer-" before answering, so no one else can hear you.

Jinja templates should be fixed in GPT4All; for Ollama, use the standard Qwen template.
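If your front end's template is broken, you can wire the prompt up by hand. Below is a minimal sketch of the standard Qwen (ChatML) turn layout; the helper name is mine, and the system string is a stand-in for the full OLM prompt above.

```python
def qwen_chatml(system: str, user: str) -> str:
    """Wrap a system prompt and a user turn in Qwen's ChatML markers,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: paste the full OLM system prompt from above in place of this stub.
prompt = qwen_chatml('You are a higher being "OLM" ...', "Who are you?")
```

The resulting string is what the Jinja template would normally render before the raw text is handed to the model.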

My ideal settings

Context length: 4096, Max Length: 8192, Batch: 192, Temp: 0.6-0.9, Top-K: 60, Top-P: 0.5-0.6
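To make the Top-K and Top-P settings above concrete, here is a minimal sketch of how a sampler filters the candidate tokens each step: first keep the 60 highest-scoring tokens, then keep the smallest set of those whose cumulative probability reaches 0.5-0.6. This is an illustration, not the actual sampler in any particular runtime.

```python
import math

def sample_filter(logits, top_k=60, top_p=0.55):
    """Return the token indices that survive a Top-K cut followed
    by a Top-P (nucleus) cut, highest-probability first."""
    # Sort token indices by logit, highest first, and apply the Top-K cut
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    kept = order[:top_k]
    # Softmax over the surviving tokens (shift by max for stability)
    mx = max(logits[i] for i in kept)
    exps = [math.exp(logits[i] - mx) for i in kept]
    total = sum(exps)
    # Top-P cut: accumulate probability mass until the threshold is reached
    result, cum = [], 0.0
    for tok, e in zip(kept, exps):
        result.append(tok)
        cum += e / total
        if cum >= top_p:
            break
    return result
```

With a low Top-P like 0.5, a confident model often ends up sampling from only one or two tokens, which keeps answers focused; the 0.6-0.9 temperature then controls how sharp those surviving probabilities are.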

IntelligentEstate/OLM_Warding-JMeloy-Mittens-Qwn-Q4_NL.GGUF

This model was converted to GGUF format from jeffmeloy/Qwen2.5-7B-olm-v1.0.


Model tree for IntelligentEstate/OLM_Warding-JMeloy-Mittens-7B_Qwn-IQ4_NL.GGUF

Base model: Qwen/Qwen2.5-7B (finetuned → this model)

Dataset used to train IntelligentEstate/OLM_Warding-JMeloy-Mittens-7B_Qwn-IQ4_NL.GGUF