# Quasar-Tiny
This is a distilled version of the Quasar Language Model, which combines a Parameter Memory Bank with Liquid Neural Networks and is designed for an effectively infinite context window.
## Model Details
- Model Type: Quasar Language Model (Decoder-Only)
- Size: ~200M parameters
- Training: Knowledge distilled from Qwen/Qwen3-0.6B
- Dataset: eyad-silx/Small-QuasarDataset
- Step: 1
## Architecture
The Quasar Language Model combines Liquid Neural Networks with a Parameter Memory Bank for unlimited context processing. Key features (an illustrative sketch follows this list):
- Decoder-Only Architecture: No positional encoding limitations
- Parameter Memory Bank: Stores and retrieves information from unlimited context
- Liquid Neural Networks: Dynamic state-based processing for better temporal modeling
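To make the last two ideas concrete, here is a minimal, self-contained sketch of how a liquid-style recurrent cell and an external memory bank can fit together. This is *not* the Quasar-Tiny source; every name in it (`LiquidCell`, `MemoryBank`, `d_model`, and so on) is an assumption made for exposition. The cell relaxes its hidden state toward an input-conditioned target with an input-dependent time constant, while the append-only bank stores a key/value pair at every step and retrieves old entries by attention, so recall is not bounded by a fixed context length.

```python
# Illustrative sketch only -- not the Quasar-Tiny implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LiquidCell(nn.Module):
    """Liquid-time-constant style cell: the hidden state relaxes toward an
    input-conditioned target with a learned, input-dependent time constant."""

    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.state_proj = nn.Linear(d_model, d_model)
        self.tau_proj = nn.Linear(d_model, d_model)

    def forward(self, x, h, dt=1.0):
        target = torch.tanh(self.in_proj(x) + self.state_proj(h))
        tau = F.softplus(self.tau_proj(x)) + 1.0   # keep time constants positive
        return h + dt * (target - h) / tau         # one Euler integration step


class MemoryBank:
    """Append-only key/value store queried with dot-product attention, so
    lookups reach arbitrarily old entries at the same cost."""

    def __init__(self, d_model: int):
        self.d_model = d_model
        self.keys, self.values = [], []

    def write(self, key, value):
        self.keys.append(key.detach())
        self.values.append(value.detach())

    def read(self, query):
        if not self.keys:
            return torch.zeros_like(query)
        K = torch.stack(self.keys)                  # (N, d_model)
        V = torch.stack(self.values)                # (N, d_model)
        attn = F.softmax(query @ K.T / self.d_model ** 0.5, dim=-1)
        return attn @ V                             # weighted recall of stored values


# Process a sequence one embedding at a time: the liquid cell carries
# short-range state, the bank remembers everything written so far.
d_model = 64
cell, bank = LiquidCell(d_model), MemoryBank(d_model)
h = torch.zeros(d_model)
for x in torch.randn(10, d_model):                  # stand-in token embeddings
    h = cell(x + bank.read(x), h)
    bank.write(x, h)
```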
## Usage
```python
from transformers import AutoTokenizer
from quasar_lm_100m import QuasarLM100M
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("eyad-silx/Quasar-Tiny")
model = QuasarLM100M.from_pretrained("eyad-silx/Quasar-Tiny")

# Generate text
prompt = "The history of artificial intelligence"
prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(prompt_ids, max_length=100)
generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(generated_text)
```
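Note that `QuasarLM100M` is imported from the custom `quasar_lm_100m` module rather than from `transformers`, so that module (presumably distributed with this repository's files) must be on your Python path before running the snippet above.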