Support us on Ko-fi

ZoraBetaA family

ZoraBetaA2 - EmpressOfRoleplay

  • ZoraBetaA2 is our brand-new AI model, finetuned from our A1 model on Iris-Uncensored-Reformat-R2 with a higher step count. Built on Zephyr Beta 7B, it shows strong roleplaying capability with an even stronger finetuned bias toward roleplay, and it hallucinates far less than MistThena7B, which was finetuned from Mistral 7B v0.1. Compared to the A1 model, A2 is much more biased toward roleplay; as a result, it performs poorly at purposes beyond roleplaying. Because Zephyr Beta already has a strong RP foundation, building on this architecture let us increase roleplaying capability further without starting from scratch.

  • ZoraBetaA2 was trained on a cleaned dataset, but it is still relatively unstable, so please report any overfitting issues, or suggestions for future models, to [email protected]. Once again, feel free to modify the LoRA to your liking; however, please credit this page, and if you extend its dataset, handle it with care and ethical consideration.

  • ZoraBetaA2 is

    • Developed by: N-Bot-Int
    • License: apache-2.0
    • Parent model: HuggingFaceH4/zephyr-7b-beta
    • Dataset combined using: UltraDatasetCleanerAndMoshpit-R1 (proprietary software)
  • Notice

    • For a good experience, please use
      • temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
  • Detail card:

    • Parameter

      • 7 Billion Parameters
      • (Please check with your GPU vendor whether your hardware can run 3B models)
    • Training

      • 300 steps on Iris-Dataset-Reformat-R1
    • Finetuning tool: Unsloth AI

      • This Zephyr model was trained 2x faster with Unsloth and Hugging Face's TRL library.
    • Fine-tuned using: Google Colab
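As a sketch of how the pieces above fit together, the LoRA adapter could be loaded on top of the parent Zephyr model with the Hugging Face `transformers` and `peft` libraries, using the sampling settings from the Notice section. The repository IDs are taken from this card; treat this as an untested sketch under those assumptions, not official usage code.

```python
# Minimal sketch: attach the ZoraBetaA2 LoRA adapter to its parent model.
# Repo IDs and sampling settings come from this model card; everything else
# is an illustrative assumption.

BASE = "HuggingFaceH4/zephyr-7b-beta"   # parent model listed above
ADAPTER = "N-Bot-Int/ZoraBetaA2"        # this LoRA adapter

# Recommended sampling settings from the Notice section
GEN_KWARGS = {
    "do_sample": True,
    "temperature": 1.5,
    "min_p": 0.1,
    "max_new_tokens": 128,
}

def load_zora():
    """Load the base Zephyr model and merge in the ZoraBetaA2 adapter."""
    # Lazy imports: these are heavy dependencies, only needed at load time.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tok = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
    model = PeftModel.from_pretrained(model, ADAPTER)
    return tok, model

def generate(tok, model, prompt):
    """Generate a roleplay response with the recommended settings."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, **GEN_KWARGS)
    return tok.decode(out[0], skip_special_tokens=True)
```

Keeping the adapter separate via `PeftModel` (rather than merging weights) makes it easy to swap or modify the LoRA, as the card invites.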
