---
license: apache-2.0
language:
  - en
base_model:
  - Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
---

# Jan-v1: The Inaugural Model of the Jan Family – Redefining Agentic Reasoning


*Jan-v1 demo image*

Authors: Alan Dao, Bach Vu Dinh, Alex Nguyen, Norapat Buppodom

## Overview

Introducing Jan-v1, the foundational model in the Jan Family – a new lineage of highly capable language models developed to power the next generation of intelligent agents within the Jan App ecosystem. Building on the innovative agentic capabilities of our earlier Lucy model, Jan-v1 represents a significant leap forward through strategic model scaling.

By leveraging the larger Qwen3-4B base, Jan-v1 demonstrates substantially enhanced "thinking" and reasoning capabilities. This architectural evolution is designed to deliver superior performance on complex agentic tasks, setting a new benchmark for accessible, high-performance AI.

## What Jan-v1 Excels At

- 🧠 **Enhanced Agentic Reasoning**: With its larger parameter count, Jan-v1 excels at deeper reasoning, complex problem-solving, and sophisticated multi-step agentic planning.
- 🎯 **Superior Question Answering**: Achieves 91.2% accuracy on SimpleQA, significantly advancing performance on factoid question answering.
- 🔍 **Advanced Agentic Web Search**: Inherits and refines Lucy's strong capabilities for agentic web search and lightweight browsing via MCP-enabled tools.
- 📱 **Optimized for Jan App**: Specifically engineered for the Jan App, ensuring seamless integration and a superior user experience.

## Evaluation

Jan-v1's strategic scaling has resulted in a notable performance uplift, particularly evident in its "thinking" and reasoning prowess. Following the established MCP benchmark methodology, Jan-v1 sets a new standard for models in its class.

| Model | SimpleQA Accuracy |
|---|---|
| Jan-v1 (Qwen3-4B) | 91.2% |
| Lucy (Qwen3-1.7B) | [Lucy's Score] |
| DeepSeek-v3 (comparison from Lucy) | [DeepSeek's Score] |

The 91.2% accuracy on SimpleQA underscores Jan-v1's advanced ability to precisely retrieve and synthesize information, showcasing the effectiveness of our model scaling approach for agentic intelligence.

## 🖥️ How to Run Locally

Jan-v1 is designed for flexible deployment and is compatible with various inference engines, including vLLM and llama.cpp, as well as local applications like Jan and LM Studio. Its integration with search APIs and web-browsing tools is facilitated through MCP (Model Context Protocol).

### Deployment

Deploy using vLLM:

```bash
# Update Menlo/Jan-v1 with your HF model ID if different
vllm serve Menlo/Jan-v1 \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```

Or use `llama-server` from llama.cpp:

```bash
llama-server ...
```
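As a sketch (the GGUF filename below is hypothetical; substitute your own local quantized build), a minimal `llama-server` invocation might look like:

```shell
# Hypothetical GGUF filename; point -m at your local quantized build.
llama-server \
    -m jan-v1-4b-Q4_K_M.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja   # apply the model's chat template so tool calls are handled
```

Both vLLM and `llama-server` expose an OpenAI-compatible API on the chosen port, so client code can be shared across backends.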

### Recommended Sampling Parameters

- Temperature: 0.7
- Top-p: 0.9
- Top-k: 20
- Min-p: 0.0
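As a minimal sketch of applying these parameters, the snippet below builds a chat-completions request body for a locally served endpoint (the model ID, endpoint URL, and prompt are assumptions for illustration, not part of this README):

```python
import json

# Assumed model ID and endpoint; adjust to your deployment.
ENDPOINT = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "Menlo/Jan-v1",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
    # Recommended sampling parameters from this model card.
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,   # top_k / min_p are accepted by vLLM as extra sampling params
    "min_p": 0.0,
}

body = json.dumps(payload)
print(body)
```

POST `body` to the endpoint with `Content-Type: application/json` using any HTTP client; an OpenAI-compatible SDK pointed at the same base URL works as well.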

## 🤝 Community & Support

## 📄 Citation

Citation details will be added soon.

**Paper**: *Jan-v1*