Winners create more winners, while losers do the opposite.

Success is a game of winners.

— Leroy Dyer (1972-Present)

VERY LONG RESPONSES!

The Human AI.

A new genre of AI! This model is trained to give highly detailed, humanized responses. It performs tasks well and is a very good model for multipurpose use. It has been trained to be more human in its responses, as well as for role playing and storytelling. This latest model was trained on conversations with a desire to respond with expressive, emotive content, as well as on discussions of various topics, with a focus on human interactions; hence there may be NSFW content in the model. This has in no way inhibited its other tasks, which were also aligned using the new intensive and expressive prompt.

Thinking Humanly:

AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science.

Thinking Rationally:

AI also seeks to formalize “laws of thought” through logic, though human thinking is often inconsistent and uncertain.

Acting Humanly:

Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language.

Acting Rationally:

Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments.

SpydazWeb AI (7b Mistral) (512k)

This model has been trained to perform with contexts of 512k, although in training it was mainly trained at 2048 for general usage. The long-context aspect also allows for advanced projects and summaries, as well as image and audio translation and generation.

Highly trained as well as methodology oriented, this model has been trained on the ReAct process and other structured processes; hence structured outputs (JSON) are very highly trained, as is orchestration of other agents and tasks. The model has been trained for tool use and function use, as well as custom processes and tools. Some tools do not even need code, since the model may generate a tool or artifact to perform the task itself.
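As a sketch of how a ReAct-style JSON tool call from the model might be consumed on the client side, here is a minimal Python parser. The tag layout ("Thought:"/"Action:") and the tool schema are assumptions for illustration, not this model's exact trained format:

```python
import json
import re

def extract_tool_call(model_output: str):
    """Pull the first JSON object out of a fenced ```json block
    in the model's response, as in a typical ReAct-style loop.
    Returns None when no valid JSON block is found."""
    match = re.search(r"```json\s*(\{.*?\})\s*```", model_output, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        return None

# Example ReAct-style turn (hypothetical format):
reply = (
    "Thought: I should look this up.\n"
    "Action:\n"
    "```json\n"
    '{"tool": "search", "arguments": {"query": "512k context"}}\n'
    "```"
)
call = extract_tool_call(reply)
print(call["tool"])  # search
```

In a full agent loop, the extracted call would be dispatched to the named tool and the result fed back into the next prompt turn.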

I have found that its responses can be nearly endless in length. I personally limit the output to 2048 tokens, but the response can still overflow across responses if required! The model often discusses with itself, even running tests, slowly building a final answer. Truly, its thoughts are out in the open now!
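Capping the output as described above might look like the following sketch. The keyword names follow the Hugging Face transformers `generate` API; the sampling values are illustrative assumptions, not recommended settings from this repo:

```python
# Generation settings that cap a single response at 2048 new tokens,
# as suggested above (sampling values are illustrative assumptions).
generation_kwargs = {
    "max_new_tokens": 2048,  # limit each individual response
    "do_sample": True,
    "temperature": 0.7,
}

# Typical usage with a transformers model (sketch, not verified
# against this exact repository):
# output_ids = model.generate(**inputs, **generation_kwargs)
print(generation_kwargs["max_new_tokens"])  # 2048
```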

I will be retraining soon to separate the thinking from the final output! Previously the thoughts were separate, as the output could be structured with the thinking in tags. But is this really what we want, or is it only mimicking the chain of thought it was previously trained on? This model was trained with human interaction, so some tasks can only be performed together. But this model will remain the DeepThinker, or OpenThinker Untagged!
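Until that retraining lands, tagged thoughts can be split from the visible answer on the client side. A minimal sketch, assuming the thoughts are wrapped in `<think>...</think>` tags (the tag name is an assumption; adjust it to whatever this model actually emits):

```python
import re

def split_thinking(text: str):
    """Separate <think>...</think> spans from the visible answer.
    Returns (list_of_thoughts, final_answer)."""
    thoughts = re.findall(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return thoughts, answer

thoughts, answer = split_thinking(
    "<think>Let me test this idea first.</think>The final answer is 42."
)
print(answer)  # The final answer is 42.
```

This keeps the open, self-discussing style intact while letting a UI show or hide the thinking separately from the final answer.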

Safetensors
Model size: 7.24B params
Tensor type: BF16