AetherDrake-SFT
- Developed by: Daemontatox
- License: Apache 2.0
- Finetuned Using: Unsloth, Hugging Face Transformers, and TRL Library
Model Overview
The AetherDrake-SFT Model is an advanced AI system optimized for logical reasoning, multi-step problem-solving, and decision-making tasks. Designed with efficiency and accuracy in mind, it employs a structured system prompt to ensure high-quality answers through a transparent and iterative thought process.
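For a quick start, the model should load like any other Llama-based causal LM on the Hugging Face Hub. The snippet below is a minimal, illustrative sketch: the system prompt wording is an assumption (the card does not publish the exact prompt), and the generation settings are placeholders.

```python
# Minimal inference sketch -- assumes the model loads as a standard
# Llama-architecture causal LM; the system prompt here is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/AetherDrake-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

system = (
    "Reason step by step inside <Thinking> tags, critique yourself inside "
    "<Critique>, revise inside <Revising>, and give the answer inside <Final>."
)
user = "A train travels 120 km in 1.5 hours. What is its average speed?"

# A plain concatenated prompt keeps the example template-agnostic; if the
# tokenizer ships a chat template, tokenizer.apply_chat_template is preferable.
prompt = f"{system}\n\n{user}\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```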
System Prompt and Workflow
This model operates using an innovative reasoning framework structured around the following steps:
Initial Thought:
The model reasons step by step within `<Thinking>` tags to craft its best possible response.

Self-Critique:
It evaluates its initial response within `<Critique>` tags, focusing on:
- Accuracy: Is it factually correct and verifiable?
- Clarity: Is it clear and free of ambiguity?
- Completeness: Does it fully address the request?
- Improvement: What can be enhanced?

Revision:
Based on the critique, the model refines its response within `<Revising>` tags.

Final Response:
The revised response is presented clearly within `<Final>` tags.

Tag Innovation:
When needed, the model creates and defines new tags for better structuring or clarity, ensuring consistent usage.
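For illustration, a response following this workflow might look like the sketch below. The content is invented for demonstration purposes, not sampled from the model:

```
<Thinking> Average speed = distance / time = 120 km / 1.5 h = 80 km/h. </Thinking>
<Critique> The arithmetic is correct, but the answer should state the unit and show the calculation. </Critique>
<Revising> Add the unit and a one-line justification. </Revising>
<Final> The average speed is 80 km/h (120 km ÷ 1.5 h). </Final>
```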
Key Features
- Structured Reasoning: Transparent, multi-step approach for generating and refining answers.
- Self-Improvement: Built-in critique and revision ensure continuous response enhancement.
- Clarity and Adaptability: Tagging system provides organized, adaptable responses tailored to user needs (see the extraction helper after this list).
- Creative Flexibility: Supports dynamic problem-solving with the ability to introduce new tags and concepts.
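Because the final answer is meant to appear inside `<Final>` tags, downstream code can extract just that part of a generation. The helper below is a hypothetical sketch (`extract_final` is not shipped with the model) and assumes the tags appear literally in the generated text:

```python
import re

def extract_final(generated_text: str) -> str:
    """Return the contents of the <Final>...</Final> block, falling back to the full text."""
    match = re.search(r"<Final>(.*?)</Final>", generated_text, flags=re.DOTALL | re.IGNORECASE)
    return match.group(1).strip() if match else generated_text.strip()

# Example: extract_final("<Thinking>120 / 1.5 = 80</Thinking><Final>80 km/h</Final>") returns "80 km/h"
```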
Use Cases
The model is designed for various domains, including:
- Research and Analysis: Extracting insights and providing structured explanations.
- Education: Assisting with tutoring by breaking down complex problems step-by-step.
- Problem-Solving: Offering logical and actionable solutions for multi-step challenges.
- Content Generation: Producing clear, well-organized creative or professional content.
Training Details
Frameworks:
- Unsloth for accelerated training.
- Hugging Face Transformers and the TRL library for supervised fine-tuning (SFT).

Dataset: Fine-tuned on diverse reasoning-focused tasks, including logical puzzles, mathematical problems, and commonsense reasoning scenarios.

Hardware Efficiency:
- Trained with bnb-4bit precision for reduced memory usage (see the configuration sketch after this section).
- Optimized training pipeline achieving roughly 2x faster training.
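For reference, the sketch below shows how a comparable 4-bit Unsloth + TRL supervised fine-tuning run is typically wired up. It is not the actual training script: the dataset path, LoRA settings, and trainer arguments are placeholders, and exact argument names vary across Unsloth/TRL versions.

```python
# Illustrative 4-bit SFT setup with Unsloth + TRL (placeholder data and
# hyperparameters; argument names may differ between library versions).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B",  # base model listed on this card
    max_seq_length=2048,
    load_in_4bit=True,                     # bnb-4bit precision for reduced memory
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset of reasoning-focused examples with a "text" column.
dataset = load_dataset("json", data_files="reasoning_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```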
Limitations
- Arithmetic Equations: The model may hallucinate mid-reasoning when working through arithmetic, as it was not trained on LaTeX-formatted equations.
- Very Complex Problems: The model tends to get sidetracked on long, multi-part problems and may answer with uncertainty.
Ethical Considerations
- Transparency: Responses are structured for verifiability through tagging.
- Bias Mitigation: Includes self-critique to minimize biases and ensure fairness.
- Safe Deployment: Users are encouraged to evaluate outputs to prevent harm or misinformation.
License
This model is distributed under the Apache 2.0 license, allowing users to use, modify, and share it in compliance with the license terms.
Acknowledgments
Special thanks to:
- Unsloth for accelerated training workflows.
- Hugging Face for their powerful tools and libraries.
Experience AetherDrake-SFT and leverage its structured reasoning and self-improvement capabilities for any task that requires advanced AI reasoning.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Benchmark | Metric | Value (%) |
|---|---|---|
| IFEval (0-shot) | strict accuracy | 48.13 |
| BBH (3-shot) | normalized accuracy | 27.14 |
| MATH Lvl 5 (4-shot) | exact match | 14.65 |
| GPQA (0-shot) | acc_norm | 9.40 |
| MuSR (0-shot) | acc_norm | 9.97 |
| MMLU-PRO (5-shot) | accuracy | 27.77 |
| Average | | 22.84 |
Model tree for Daemontatox/AetherDrake-SFT
- Base model: meta-llama/Llama-3.1-8B