|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
base_model: |
|
- Qwen/Qwen3-4B-Thinking-2507 |
|
pipeline_tag: text-generation |
|
--- |
|
# Jan-v1: Advanced Agentic Language Model |
|
|
|
[GitHub](https://github.com/menloresearch/deep-research)

[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)

[Jan App](https://jan.ai/)
|
|
|
|
|
|
## Overview |
|
|
|
Introducing **Jan-v1**, the first release in the **Jan Family**, designed for advanced agentic reasoning and complex problem-solving within the [Jan App](https://jan.ai/). Building on the agentic capabilities of our earlier [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 represents a significant step forward through strategic model scaling.
|
|
|
Jan-v1 leverages the newly released [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to deliver significantly enhanced reasoning capabilities and effective tool utilization. This architectural evolution is designed to deliver superior performance on complex agentic tasks, setting a new benchmark for accessible, high-performance AI.
|
|
|
## Performance |
|
|
|
### Question Answering (SimpleQA) |
|
For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.2% accuracy on SimpleQA.
|
|
|
 |
|
|
|
*The 91.2% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.* |
|
|
|
### Chat Benchmarks |
|
|
|
These benchmarks evaluate the model's conversational and instructional capabilities. |
|
|
|
 |
|
|
|
## Quick Start |
|
|
|
### Integration with Jan App |
|
|
|
Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities. |
|
|
|
 |
|
|
|
### Local Deployment |
|
|
|
**Using vLLM:** |
|
```bash
vllm serve Menlo/Jan-v1 \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
|
|
|
**Using llama.cpp:** |
|
```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234
```
|
|
|
### Recommended Parameters |
|
|
|
```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
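Both `vllm serve` and `llama-server` expose an OpenAI-compatible `/v1/chat/completions` endpoint on the port configured above. As a minimal sketch (not an official client), the request below applies the recommended sampling parameters; the base URL, model name, and `build_chat_request` helper are assumptions to adapt to your setup:

```python
import json

def build_chat_request(prompt, base_url="http://localhost:1234"):
    """Build the endpoint URL and JSON body for a chat completion
    request using the recommended sampling parameters above.
    Hypothetical helper; adjust model name and base_url as needed."""
    payload = {
        "model": "Menlo/Jan-v1",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,   # accepted by vLLM and llama.cpp as API extensions
        "min_p": 0.0,
        "max_tokens": 2048,
    }
    return f"{base_url}/v1/chat/completions", json.dumps(payload)

url, body = build_chat_request("What is agentic reasoning?")
```

The resulting `url` and `body` can be sent with any HTTP client (e.g. `requests.post(url, data=body, headers={"Content-Type": "application/json"})`) once one of the servers above is running.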
|
|
|
|
|
## Community & Support
|
|
|
- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-v1/discussions)
|
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/) |
|
|
|
## Citation
|
```bibtex |
|
Updated Soon |
|
``` |
|
--- |