Excited to Share My New Project: An Arabic Logical Reasoning AI Model


Here's the post I drafted to share this work with the tech community, written in a personal style and in English:

Suggested Post Title: 🚀 Excited to Share My New Project: An Arabic Logical Reasoning AI Model! 🧠💡

[Screenshots attached]

Post Body:

Hey everyone,

After a lot of work and experimentation, I'm super excited to finally share something I've been passionately working on! I've successfully fine-tuned a Qwen model (specifically based on unsloth/Qwen3-14B) to be really good at logical reasoning and arithmetic, all in Arabic. This model, which I've named Bee1reason-arabic-Qwen-14B, thinks and responds entirely in Arabic.

The idea came to me when I noticed a gap in models that can handle complex logical inference in Arabic, especially when you need to understand the "thinking steps" behind an answer. That's why I focused the fine-tuning on a custom dataset (beetlware/arabic-reasoning-dataset-logic) packed with Arabic logical problems (deduction, induction, abduction). The goal was to create a model that doesn't just give an answer, but can also "show its work" inside <think> ... </think> tags before delivering the final Arabic response.
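To make that concrete, here's a hypothetical illustration of the response structure I'm describing (this exact example is mine, not taken from the dataset). Roughly: "A is bigger than B, and B is bigger than C. Which is the biggest?" The model reasons through the premises inside the tags, then gives the final Arabic answer:

```text
السؤال: أ أكبر من ب، وب أكبر من ج. من الأكبر؟

<think>
المعطيات: أ أكبر من ب، وب أكبر من ج.
إذن، بالاستنباط المنطقي: أ أكبر من ج.
</think>

الإجابة النهائية: أ هو الأكبر.
```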

It's built using the awesome Unsloth library for efficient LoRA fine-tuning, and the final version I've pushed is a merged 16-bit model, so it's ready to use directly.
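For anyone curious what that workflow roughly looks like, here's a minimal sketch of an Unsloth LoRA setup followed by a merged 16-bit export. The hyperparameters here (sequence length, LoRA rank/alpha, target modules, 4-bit loading) are illustrative assumptions on my part, not the exact values I used:

```python
from unsloth import FastLanguageModel

# Load the base model (4-bit here just to fit comfortably on a single GPU; assumption on my part).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters. Rank/alpha and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)

# ... fine-tune on beetlware/arabic-reasoning-dataset-logic (e.g. with TRL's SFTTrainer) ...

# Merge the LoRA weights and save a plain 16-bit checkpoint, ready for direct use.
model.save_pretrained_merged(
    "Bee1reason-arabic-Qwen-14B", tokenizer, save_method="merged_16bit"
)
```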

Here's a quick rundown of what it can do:

- Solves logical puzzles in Arabic.
- Performs arithmetic calculations described in Arabic.
- Can (based on its training) output its reasoning steps before the answer.
- Fully conversational in Arabic for these tasks.

I've put a lot of effort into the data preparation and fine-tuning process to make it robust. You can check it out on the Hugging Face Hub:
➑️ beetlware/Bee1reason-arabic-Qwen-14B

I've also included a detailed model card with usage examples (including how to run it with Transformers and vLLM for those interested in scaled inference).
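If you just want a quick taste before reading the model card, a minimal Transformers snippet looks roughly like this (the prompt and generation settings are just my illustrative defaults):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beetlware/Bee1reason-arabic-Qwen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# An Arabic logic puzzle: "Ahmed is taller than Khaled, and Khaled is taller
# than Sami. Who is the tallest?"
messages = [
    {"role": "user", "content": "أحمد أطول من خالد، وخالد أطول من سامي. من الأطول؟"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated portion of the output.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For scaled inference, the same merged checkpoint can be served with vLLM (e.g. vllm serve beetlware/Bee1reason-arabic-Qwen-14B); the model card has the full examples.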

I'm really keen to see what the community thinks and how this might be useful for others working on Arabic NLP or AI reasoning. Would love to get your feedback, hear your thoughts, or see if you find interesting use cases for it! Let me know if you try it out.

Big thanks to the Unsloth team for their great library that made the training process so much smoother!

#AI #LLM #ArabicNLP #Qwen #FineTuning #LogicalReasoning #MachineLearning #HuggingFace #Unsloth #OpenSource
