Optrix-1

Optrix-1 is a 1-billion-parameter base language model developed by SVECTOR for general-purpose language generation and understanding. Pretrained on a broad corpus, it provides a strong foundation for fine-tuning on tasks such as summarization, dialogue, and retrieval.

Key Features

  • 1B-parameter transformer architecture
  • Pretrained on a broad, diverse corpus
  • Optimized for efficient inference and low memory usage
  • Suitable for fine-tuning on a wide range of language tasks

Usage

With Hugging Face Transformers

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SVECTOR-CORPORATION/Optrix-1")
model = AutoModelForCausalLM.from_pretrained("SVECTOR-CORPORATION/Optrix-1")

inputs = tokenizer("What is AI?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
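
Because the published weights are BF16, loading them in that dtype roughly halves memory versus the FP32 default. A minimal sketch, assuming the accelerate package is installed so device_map="auto" can place the weights automatically:

import torch
from transformers import AutoModelForCausalLM

# Load in BF16 (the precision the checkpoint ships in) and let
# accelerate spread the weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "SVECTOR-CORPORATION/Optrix-1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)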

With llama.cpp (GGUF Format)

./main -m Optrix-1.gguf -p "What is AI?"

Ensure the model has been converted to GGUF format before running inference; note that recent llama.cpp builds name the binary llama-cli rather than main.
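
To run the same GGUF file without leaving Python, the llama-cpp-python bindings are one option; "Optrix-1.gguf" below is a placeholder name for whatever file your conversion step produced:

from llama_cpp import Llama

# model_path is an assumed location for the converted GGUF checkpoint.
llm = Llama(model_path="Optrix-1.gguf", n_ctx=4096)
out = llm("What is AI?", max_tokens=64)
print(out["choices"][0]["text"])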


Model Specifications

Developer: SVECTOR
Architecture: Custom transformer with Grouped-Query Attention (GQA)
Parameters: 1.24B (BF16, Safetensors)
Embedding Dimension: 2048
Layers: 16
Attention Heads: 32
Vocabulary Size: 128,256
Max Position Embeddings: 131,072
Positional Encoding: Rotary with dynamic scaling
Activation Function: GELU
Output Head: Tied linear projection
Languages: English, German, French, Spanish, Hindi, Portuguese, Thai, Italian, and others
Release Date: June 27, 2025
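
The card does not spell out the rotary scaling rule. The sketch below follows the "dynamic" NTK-style rule used by Hugging Face Transformers' rope_scaling, which enlarges the rotary base once the sequence outgrows the trained window; treat the formula and the head_dim of 64 (2048 / 32 heads) as assumptions:

import torch

def rope_angles(head_dim, seq_len, max_pos=131072, base=10000.0, factor=1.0):
    # Dynamic NTK scaling: past the trained window, grow the rotary base
    # so the lowest frequency still spans the full sequence.
    if seq_len > max_pos:
        base = base * ((factor * seq_len / max_pos) - (factor - 1)) ** (head_dim / (head_dim - 2))
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return torch.outer(torch.arange(seq_len).float(), inv_freq)  # (seq_len, head_dim/2)

def apply_rope(x, angles):
    # Rotate consecutive feature pairs of x (..., seq_len, head_dim) by
    # the per-position, per-frequency angles.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out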


Architecture Overview

  • Embedding Layer: nn.Embedding(vocab_size, hidden_size)
  • Transformer Block (×16, see the sketch after this list):
    • nn.MultiheadAttention(batch_first=True)
    • 2-layer MLP with GELU activation
    • LayerNorm (pre-attention and pre-MLP)
  • Final LayerNorm
  • Output Layer: nn.Linear(hidden_size, vocab_size, bias=False)
  • Causal Masking: Left-to-right for autoregressive generation
  • Rotary Embeddings: Applied with dynamic scaling
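
To make that layout concrete, here is a minimal PyTorch sketch of one pre-norm block as listed above. Note the overview names standard nn.MultiheadAttention, so grouped-query attention is not modeled here, and the 4× MLP expansion is an assumption:

import torch
import torch.nn as nn

class Block(nn.Module):
    # One pre-norm transformer block: LN -> attention -> residual,
    # then LN -> 2-layer GELU MLP -> residual.
    def __init__(self, hidden_size=2048, num_heads=32, mlp_ratio=4):
        super().__init__()
        self.ln_attn = nn.LayerNorm(hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.ln_mlp = nn.LayerNorm(hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, mlp_ratio * hidden_size),
            nn.GELU(),
            nn.Linear(mlp_ratio * hidden_size, hidden_size),
        )

    def forward(self, x, attn_mask=None):
        h = self.ln_attn(x)
        a, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + a
        return x + self.mlp(self.ln_mlp(x))

# Causal mask: True marks positions a token may not attend to.
seq_len = 8
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
y = Block()(torch.randn(1, seq_len, 2048), attn_mask=mask)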


Example Configuration

{
  "architectures": ["OptrixForCausalLM"],
  "hidden_size": 2048,
  "num_hidden_layers": 16,
  "num_attention_heads": 32,
  "vocab_size": 128256,
  "max_position_embeddings": 131072,
  "model_type": "optrix-1"
}
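
The same fields can be read programmatically. Whether trust_remote_code is needed depends on whether the custom "optrix-1" model type ships as code inside the repository, so treat that flag as an assumption:

from transformers import AutoConfig

# trust_remote_code lets Transformers import a custom config class
# defined in the model repo, if the "optrix-1" type is not built in.
config = AutoConfig.from_pretrained(
    "SVECTOR-CORPORATION/Optrix-1",
    trust_remote_code=True,
)
print(config.hidden_size, config.num_hidden_layers, config.vocab_size)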

License & Contact

Use of this model is governed by the SVECTOR License. For inquiries, please contact SVECTOR.
