Jan-v1: Advanced Agentic Language Model
Overview
Jan-v1 is the first release in the Jan Family, designed for agentic reasoning and problem-solving within the Jan App. Based on our Lucy model, Jan-v1 achieves improved performance through model scaling.
Jan-v1 is built on the Qwen3-4B-Thinking model, which provides enhanced reasoning capabilities and tool use. This architecture delivers better performance on complex agentic tasks.
Performance
Question Answering (SimpleQA)
For question answering, Jan-v1 benefits substantially from model scaling, reaching 91.1% accuracy on SimpleQA.
This 91.1% SimpleQA accuracy is a notable result in factual question answering for a model of this size, and reflects the effectiveness of our scaling and fine-tuning approach.
Chat Benchmarks
These benchmarks evaluate the model's conversational and instruction-following capabilities.
Quick Start
Integration with Jan App
Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.
Local Deployment
Using vLLM:
vllm serve janhq/Jan-v1-4B \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes
Using llama.cpp:
llama-server --model Jan-v1-4B-Q4_K_M.gguf \
--host 0.0.0.0 \
--port 1234 \
--jinja \
--no-context-shift
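Both commands expose an OpenAI-compatible server on port 1234. As a quick check that the server is up and the model loaded, you can list the served models (a minimal sketch; adjust the host and port if you changed them above):

# confirm the server is running and the model is registered
curl http://localhost:1234/v1/models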
Recommended Parameters
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
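When calling the server directly, these settings can be passed in the body of a chat-completions request. The example below is a sketch that assumes one of the servers from the previous section is running on port 1234; the prompt is a placeholder, and note that top_k and min_p are extensions to the standard OpenAI schema that vLLM and llama.cpp accept in the request body.

# chat-completions request using the recommended sampling parameters
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "janhq/Jan-v1-4B",
    "messages": [{"role": "user", "content": "Plan the steps to research a topic and summarize it."}],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048
  }'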
🤗 Community & Support
- Discussions: HuggingFace Community
- Jan App: Learn more about the Jan App at jan.ai
Citation
To be updated soon.
Base model: Qwen/Qwen3-4B-Thinking-2507