Model Card: THOTH_R

base_model: AIDC-AI/Marco-o1
tags:
  • text-generation-inference
  • transformers
  • unsloth
  • qwen2
  • trl
license: apache-2.0
language:
  • en
library_name: transformers

Model Overview

  • Developed by: Daemontatox
  • Base Model: AIDC-AI/Marco-o1
  • License: Apache-2.0

THOTH_R is a Qwen2-based large language model (LLM), fine-tuned from AIDC-AI/Marco-o1 and optimized for text generation. Its streamlined training process and efficient architecture make it a practical choice for applications that require natural language understanding and generation.
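
The following is a minimal inference sketch using the transformers library. The repository id and FP16 weights follow this card; the chat-template call assumes the tokenizer ships a Qwen2-style chat template, and the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch for THOTH_R with the Hugging Face transformers library.
# Assumes the checkpoint loads with AutoModelForCausalLM and that the tokenizer
# provides a chat template (typical for Qwen2-based models).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/THOTH_R"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are shipped in FP16
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the water cycle in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```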


Key Features

  • Accelerated Training:

    • Trained 2x faster with Unsloth, a training optimization framework.
    • Fine-tuned with Hugging Face's TRL (Transformer Reinforcement Learning) library for task-specific adaptability (a fine-tuning sketch follows this list).
  • Primary Use Cases:

    • Text generation
    • Creative content creation
    • Dialogue and conversational AI systems
    • Question-answering systems
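
The exact training recipe is not published in this card, but the sketch below illustrates the kind of Unsloth + TRL supervised fine-tuning setup described above. The dataset path, LoRA configuration, and hyperparameters are placeholders, not the values used for THOTH_R.

```python
# Illustrative Unsloth + TRL fine-tuning sketch (not the published THOTH_R recipe).
# Uses the older SFTTrainer keyword API; newer TRL versions move these settings
# into SFTConfig and rename tokenizer= to processing_class=.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the base model (AIDC-AI/Marco-o1) with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AIDC-AI/Marco-o1",
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # assumption: 4-bit QLoRA-style loading
)

# Attach LoRA adapters; ranks and target modules are common defaults, not the card's values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```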

Acknowledgements

The fine-tuning of THOTH_R was accomplished with love and precision using Unsloth.

For collaboration, feedback, or contributions, visit the repository or connect with the developers.

Model Specifications

  • Format: Safetensors
  • Parameters: 7.62B
  • Tensor type: FP16