"Success comes from defining each task in achievable steps.

Every completed step is a success that brings you closer to your goal.

SpydazWeb AI (7B Mistral, 512k)

This model has been trained to perform with contexts of up to 512k tokens, although most training used a 2048-token context for general usage. The long-context capability also allows for advanced projects and summaries, as well as image and audio translation and generation.
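As a minimal loading sketch (assuming the standard Hugging Face transformers workflow; the prompt and generation settings are illustrative, not prescribed by this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/_Spydaz_Web_AI_AGI_R1_003"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Long contexts are supported, but most training used a 2048-token window,
# so short prompts are the general-usage path.
prompt = "Summarise the following project notes step by step:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```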

Highly trained and methodology oriented, this model has been trained on the ReAct process and other structured processes, so structured outputs (JSON) are very well trained, as is the orchestration of other agents and tasks. The model has been trained for tool use and function calling, as well as custom processes and tools. Some tools do not even need code: the model may generate a tool or artifact itself to perform the task.
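As a rough illustration of a ReAct-style, JSON-structured tool call (the tag wording, tool names and schema below are assumptions for this sketch, not the card's official format):

```python
# Hypothetical ReAct-style system prompt; the JSON schema and tool names
# are illustrative assumptions, not the model's documented interface.
system = (
    "You are an agent. Think step by step, then either call a tool or answer.\n"
    'Respond with JSON: {"thought": ..., "action": ..., "action_input": ...}\n'
    "Available tools: web_search(query), calculator(expression)"
)
user = "What is 17% of 2,430?"

# A structured completion in this style might look like:
# {"thought": "This is simple arithmetic, use the calculator.",
#  "action": "calculator",
#  "action_input": "0.17 * 2430"}
```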

Winners create more winners, while losers do the opposite.

Success is a game of winners.

— Leroy Dyer (1972–Present)

The Human AI - Current Model

This model, I can say... wow! GRPO works. I think the difference is not in the output itself: as we have seen with the SpydazWeb AI models, I have experimented with various output styles and prompting styles, and trained on various tasks and functions. BUT the model did not seem to be advancing, even though, when used as an agent, it performs its task very well without needing to call other models, since it has been trained for each agent role.

A new genre of AI! This model is trained to give highly detailed, humanized responses. It performs tasks well and is a very good model for multipurpose use. It has been trained to be more human in its responses, as well as for role playing and story telling. This latest model has been trained on conversations with a desire to respond with expressive, emotive content, as well as on discussions of various topics. It has also been focused on conversations drawn from human interactions, so there may be NSFW content in the model. This has in no way inhibited its other tasks, which were also aligned using the new intensive and expressive prompt.

Thinking Humanly:

AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science.

Thinking Rationally:

AI also seeks to formalize “laws of thought” through logic, though human thinking is often inconsistent and uncertain.

Acting Humanly:

Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language.

Acting Rationally:

Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments.

Deep Reasoner Model

The base model has been created as a new starting point. It has been fully primed with various types of chains of thought and step-by-step solutions, enabling reward training to take place. This model has been trained on various languages (not intensively), enabling cross-language understanding. Here we create a valid starting point for agent-based modelling, as we find that some training actually affects existing knowledge, hence agents become a thing! Or, if you prefer, distillations. These agents can be medical, technical, roleplayers, etc.

GRPO has enabled the model to generate internal chains of thought for previously trained data: data that had already been trained was re-trained, and the network created chains of thought for problems which previously had no explanation for their results. So now we can train on the benchmark datasets and find these issues, and the model has become its own intelligence, learning untrained things. The other takeaway is formatting and the use of structured outputs in responses: think tags, reasoning tags, planning tags and explanation tags are a way to normalise the outputs so they always appear in this form. This also gives us something to penalize or reward. Some tasks do not require reasoning or explanation but may still require planning, so these things can be generated as well as trained. Hence they can be re-trained with rewards, enabling the task to be fully formatted and fully performed in the agentic technique, such as plan, think, observe, act, reflect, and so on.
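As a minimal sketch of such a format-based reward (the tag names and weights are assumptions, not the exact reward used in training), a GRPO-style reward can simply score whether each sampled completion contains the expected structured tags:

```python
import re

# Hypothetical format reward for GRPO-style training: the tag names and
# weights are illustrative assumptions, not this card's exact reward.
def format_reward(completion: str) -> float:
    reward = 0.0
    # Reward the presence of each well-formed structured section.
    for tag in ("plan", "think", "observe", "act", "reflect"):
        if re.search(rf"<{tag}>.*?</{tag}>", completion, re.DOTALL):
            reward += 0.2
    return reward

def batch_rewards(completions: list[str]) -> list[float]:
    # GRPO compares a group of sampled completions for the same prompt,
    # so a reward is computed for every completion in the group.
    return [format_reward(c) for c in completions]
```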

Agent Roles

**Planning**
**Coding**
**ToolUse**
**WebSearch**
**Reflection**
**Explanation and reasoning**
**Tasks**
**Output Formatting**
**Content Recall**
**Deep Research**
**Deep Calculation**
**REPL**

And the list goes on!
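One way to use these roles (the system prompts below are illustrative assumptions, not shipped with the model) is to select each role via a dedicated system prompt:

```python
# Hypothetical role prompts; the wording is an assumption for illustration only.
AGENT_ROLES = {
    "Planning":   "You are a planner. Break the task into numbered, achievable steps.",
    "Coding":     "You are a coder. Return complete, runnable code with brief comments.",
    "ToolUse":    "You are a tool-using agent. Respond with a JSON tool call when needed.",
    "Reflection": "You are a reviewer. Critique the previous answer and list improvements.",
}

def build_messages(role: str, task: str) -> list[dict]:
    # Standard chat-format messages; the keys above mirror the agent roles
    # listed in this card.
    return [
        {"role": "system", "content": AGENT_ROLES[role]},
        {"role": "user", "content": task},
    ]

messages = build_messages("Planning", "Summarise a 400k-token project log.")
```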

We also trained various scripts and personas, and merged a few roleplay models, whilst still targeting benchmarks. The model improved locally, but not on the benchmarks!

Model: LeroyDyer/_Spydaz_Web_AI_AGI_R1_003 · 7.24B parameters · Safetensors · BF16