---

base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
language:
  - en
library_name: transformers
license: apache-2.0
datasets:
  - LSXPrime/ProseFlow-Actions-v1
tags:
  - text-generation
  - instruction
  - proseflow
  - unsloth
  - qwen
  - code-assistant
  - writing-assistant
---


# ProseFlow-v1-1.5B-Instruct

**ProseFlow-v1-1.5B-Instruct** is a versatile instruction-tuned model designed to be the local AI engine for the [ProseFlow desktop application](https://github.com/LSXPrime/ProseFlow). This model excels at a wide variety of text-processing and code-related tasks, making it an ideal choice for users who want a high-performance, private, and offline-capable AI assistant integrated into their daily workflow.

This model was fine-tuned from [**Qwen/Qwen2.5-Coder-1.5B-Instruct**](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct), inheriting its strong coding and logical reasoning capabilities, and has been further specialized to follow the specific, structured prompts used by ProseFlow "Actions".

The model was fine-tuned on the [**ProseFlow-Actions-v1**](https://huggingface.co/datasets/LSXPrime/ProseFlow-Actions-v1) dataset.
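
While the ProseFlow app is the primary way to use this model, it can also be loaded directly with the `transformers` library. The snippet below is a minimal sketch: it assumes the standard Qwen2.5 chat template inherited from the base model and a repository id of `LSXPrime/ProseFlow-v1-1.5B-Instruct`, and the Action-style instruction is illustrative rather than the canonical ProseFlow prompt.

```python
# Minimal sketch (not the official ProseFlow integration): query the model with
# an Action-style instruction via transformers. Assumes the Qwen2.5 chat
# template inherited from the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LSXPrime/ProseFlow-v1-1.5B-Instruct"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{
    "role": "user",
    "content": "Proofread the following text and output only the corrected version:\n\n"
               "Their going to the park tomorow.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```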

## Model Description

ProseFlow is a universal AI text processor that works via global hotkeys. Users create "Actions" – reusable instructions for the AI – to perform tasks like proofreading, summarizing, refactoring code, or changing the tone of a text. This model is the brain that executes those instructions.
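
To make the Action concept concrete, the tiny helper below illustrates the instruction-plus-input shape these Actions take. The exact template ProseFlow composes internally is not documented in this card, so both the helper and its format are hypothetical.

```python
# Hypothetical illustration of an Action: a reusable instruction applied to the
# user's selected text. The real ProseFlow prompt format may differ.
def build_action_prompt(instruction: str, selected_text: str) -> str:
    return f"{instruction}\n\n{selected_text}"

prompt = build_action_prompt(
    "Summarize the following meeting notes as three bullet points.",
    "Q3 review: the local provider ships in October; docs and installer follow.",
)
```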

**`ProseFlow-v1-1.5B-Instruct`** is the recommended and primary local model for the application. It strikes an excellent balance between performance, resource requirements, and capability.

### Key Strengths

Based on comprehensive evaluations against the `ProseFlow-Actions-v1` dataset, this model demonstrates:

*   **Excellent Task Comprehension:** The model consistently understands the *intent* behind a wide variety of instructions, from simple text manipulation to complex business and logical tasks.
*   **Superior Code Intelligence:** Thanks to its Qwen2.5-Coder base, the model is exceptionally proficient at code-related tasks like explaining code snippets, finding bugs, refactoring for efficiency, and adding comments.
*   **High-Quality Text Generation:** It produces coherent, high-quality output for summarization, expansion, and creative writing prompts.
*   **Strong Reasoning Capabilities:** The model can successfully solve multi-step word problems and perform logical deductions required by more advanced actions.
*   **Versatility:** It performs reliably across dozens of distinct tasks, including sentiment analysis, data extraction (JSON conversion), and professional email drafting.

### Intended Use

This model is primarily intended to be used within the **ProseFlow desktop application**. Its prompt format and output style are specifically tuned to work seamlessly with the app's "Action" system.

When used in ProseFlow, it provides a powerful, private, and offline alternative to cloud-based AI services.

**Primary Use Cases:**
*   Code refactoring, debugging, and documentation.
*   Drafting and improving professional emails and documents.
*   Summarizing long articles or meeting notes.
*   Proofreading and enhancing creative or technical writing.
*   Brainstorming ideas and generating structured content.

### Limitations and Considerations

While highly capable, this model has a few known behaviors:

*   **Instruction Following vs. Helpfulness:** The model is so strongly aligned toward being a helpful assistant that it sometimes prepends conversational headers or brief explanations to its output (e.g., "Here is the refactored code:"), violating the strict "output only" constraint in some training prompts. Within the ProseFlow application this is usually a minor issue, but it is a deviation from the prompt instructions; a simple post-processing sketch follows this list.
*   **Complex Logic:** While it can handle multi-step logic, it may fail on more abstract or tricky logical puzzles (e.g., identifying an anomaly in a nuanced list).
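
If the occasional conversational preamble matters for a downstream integration, a light post-processing pass can remove it. The heuristic below is only a sketch and is not part of ProseFlow itself.

```python
import re

# Heuristic sketch: drop a single leading preamble line such as
# "Here is the refactored code:" before using the output verbatim.
_PREAMBLE = re.compile(r"^(here is|here's|sure|certainly)\b.*\n+", re.IGNORECASE)

def strip_preamble(text: str) -> str:
    return _PREAMBLE.sub("", text.lstrip(), count=1)

print(strip_preamble("Here is the refactored code:\n\ndef add(a, b):\n    return a + b"))
```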

## How to Use in ProseFlow

1.  [Download and install the ProseFlow application](https://github.com/LSXPrime/ProseFlow/releases).
2.  Navigate to the **Providers -> Local Provider** tab.
3.  Click "Manage Models..." and download `ProseFlow-v1-1.5B-Instruct` from the "Available for Download" list.
4.  Once downloaded, select it from the "My Models" list.
5.  Set your "Primary Service Type" in ProseFlow to **Local**.
6.  You're all set! The application will now use this model for all AI actions.

## Training Details

*   **Base Model:** [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
*   **Dataset:** [LSXPrime/ProseFlow-Actions-v1](https://huggingface.co/datasets/LSXPrime/ProseFlow-Actions-v1)
*   **Fine-tuning Library:** [Unsloth](https://github.com/unslothai/unsloth)
*   **Fine-tuning Method:** Supervised fine-tuning on a dataset of structured instruction-input-output triplets; a rough sketch of this setup is shown below.
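
The outline below sketches that setup using Unsloth LoRA adapters with TRL's `SFTTrainer`. The hyperparameters, LoRA configuration, and the assumption that the triplets are rendered into a single `text` column are illustrative, not the exact training recipe.

```python
# Rough sketch of the fine-tuning setup described above; hyperparameters and
# dataset formatting are illustrative assumptions, not the exact recipe.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-Coder-1.5B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumes the instruction-input-output triplets have already been rendered
# into a single "text" column using the model's chat template.
dataset = load_dataset("LSXPrime/ProseFlow-Actions-v1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```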

## License

This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).