# Jaxxon
Jaxxon is an experimental multimodal symbolic reasoning model built for ARC 2025. It is designed to integrate advanced text reasoning with lightweight vision capabilities via CLIP, enabling abstract understanding and symbolic processing across modalities.
## Overview
Jaxxon focuses on:
- Text-first symbolic reasoning
- Optional visual input using CLIP
- A modular architecture for identity-based routing, symbolic thread handling, and emergent logic
- Testing symbolic alignment using both text prompts and image examples
## Architecture
- Base: Transformer backbone (custom)
- Vision Support: CLIP encoder for image-text pattern alignment
- Routing Core: Custom symbolic thread and identity map (in progress)
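
The snippet below is a minimal sketch of how the CLIP encoder could feed image-text pattern alignment into the routing core. The checkpoint name (`openai/clip-vit-base-patch32`), the `SymbolicRouter` stub, and the fusion step are illustrative assumptions, not the actual Jaxxon components.

```python
# Illustrative sketch only: wiring a CLIP encoder to a routing stub.
# The checkpoint name and SymbolicRouter are assumptions, not Jaxxon internals.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


class SymbolicRouter(nn.Module):
    """Hypothetical routing core: soft-assigns fused features to symbolic threads."""

    def __init__(self, dim: int, num_threads: int = 8):
        super().__init__()
        self.gate = nn.Linear(dim, num_threads)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.gate(fused), dim=-1)


clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))  # placeholder image
inputs = processor(
    text=["a 3x3 grid with a rotated pattern"],
    images=image,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    text_emb = clip.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    image_emb = clip.get_image_features(pixel_values=inputs["pixel_values"])

# Concatenate both modalities and route them to a symbolic thread.
fused = torch.cat([text_emb, image_emb], dim=-1)
router = SymbolicRouter(dim=fused.shape[-1])
print(router(fused))  # per-thread assignment weights
```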
## Current Status
This project is in early development. Initial steps include:
- Repository initialized
- Symbolic routing core setup
- CLIP integration for visual reasoning tests
- Test dataset alignment
- First ARC-style reasoning evaluation
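
For the ARC-style evaluation, tasks are stored as JSON with `train` and `test` lists of input/output grids, and a prediction only counts if the produced grid matches the expected grid exactly. The scoring loop below is a hedged sketch of that setup; the `solve` stub and the `arc_tasks/` directory are placeholders, not the project's actual evaluation harness.

```python
# Sketch of ARC-style exact-grid scoring. `solve` is a placeholder for the model;
# the task layout follows the public ARC JSON format
# ({"train": [...], "test": [{"input": ..., "output": ...}]}).
import json
from pathlib import Path
from typing import List

Grid = List[List[int]]


def solve(train_pairs: list, test_input: Grid) -> Grid:
    # Placeholder: a real solver would infer the transformation from the
    # train pairs and apply it to the test input.
    return test_input


def score_task(task: dict) -> float:
    correct = 0
    for case in task["test"]:
        prediction = solve(task["train"], case["input"])
        correct += int(prediction == case["output"])
    return correct / len(task["test"])


if __name__ == "__main__":
    scores = [
        score_task(json.loads(path.read_text()))
        for path in Path("arc_tasks").glob("*.json")
    ]
    if scores:
        print(f"Exact-solve rate: {sum(scores) / len(scores):.2%}")
```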
## File Structure (planned)
## Goals for ARC 2025
Jaxxon will be used to explore:
- Emergent logic in hybrid text-image inputs
- Symbolic compression and analogical inference
- Modular neural core architectures for self-reflective models
## Credits
Developed by Bayang Pathek with support from the KernelTwin system.
## License
Apache 2.0
## Evaluation Metrics
Jaxxon will be evaluated using a mix of conventional and symbolic-reasoning-oriented metrics:
| Metric | Description |
|---|---|
| Accuracy | Correct answers / total answers |
| Exact Match (EM) | String-level match between prediction and reference |
| F1 Score | Token-overlap between prediction and reference |
| BLEU | n-gram overlap between generated and reference outputs |
| CLIPScore | Text-image alignment (via TorchMetrics `CLIPScore`) |
| Logical Consistency (planned) | Composite measure of reasoning quality |
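
A hedged sketch of how these metrics could be computed: Exact Match and token-level F1 are implemented directly, while CLIPScore comes from TorchMetrics as noted in the table. The checkpoint name, dummy image, and example strings are placeholders.

```python
# Illustrative metric computation; checkpoint and examples are placeholders.
from collections import Counter

import torch
from torchmetrics.multimodal.clip_score import CLIPScore


def exact_match(pred: str, ref: str) -> float:
    return float(pred.strip() == ref.strip())


def token_f1(pred: str, ref: str) -> float:
    pred_tokens, ref_tokens = pred.split(), ref.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


print(exact_match("rotate 90", "rotate 90"))        # 1.0
print(token_f1("rotate the grid 90", "rotate 90"))  # partial credit

# Text-image alignment via TorchMetrics CLIPScore.
clip_score = CLIPScore(model_name_or_path="openai/clip-vit-base-patch32")
images = torch.randint(0, 255, (1, 3, 224, 224), dtype=torch.uint8)  # dummy image
print(clip_score(images, ["a 3x3 grid with a rotated pattern"]))
```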
## Base Model
mistralai/Mistral-7B-Instruct-v0.2