prithivMLmods committed on
Commit 46c0e35 · verified · 1 Parent(s): b6cd230

Update README.md

Files changed (1): README.md (+65, -1)
README.md CHANGED

- jolt
---

![lkaSDRhgn.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/bdg2qYoDlP6m2SCnM3hXp.png)
# **Jolt-v0.1**

Jolt-v0.1 is built on the Qwen 2.5 14B architecture and is designed to strengthen the reasoning capabilities of 14B-parameter models. It has been fine-tuned on synthetic math and chain-of-thought (CoT) datasets, further improving its CoT reasoning and logical problem-solving. The model shows significant gains in context understanding, structured data processing, and long-context comprehension, making it well suited to complex reasoning tasks, instruction following, and text generation.

### **Key Improvements**
1. **Enhanced Knowledge and Expertise**: Improved mathematical reasoning, coding proficiency, and structured data processing.
2. **Fine-Tuned Instruction Following**: Optimized for precise responses, structured outputs such as JSON (see the structured-output sketch after the quickstart), and long-form generation of 8K+ tokens.
3. **Greater Adaptability**: Better role-playing capabilities and resilience to diverse system prompts.
4. **Long-Context Support**: Handles inputs of up to **128K tokens** and generates up to **8K tokens** per output (see the long-context note right after this list).
5. **Multilingual Proficiency**: Supports over **29 languages**, including Chinese, English, French, Spanish, Portuguese, German, and more.

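On Qwen 2.5 checkpoints the native context window is typically 32K, with 128K enabled via YaRN RoPE scaling; whether this checkpoint already ships with that configured is not stated on the card. A minimal sketch, assuming the standard Qwen 2.5 YaRN recipe applies here (the `rope_scaling` values are an assumption, not confirmed by this card):

```python
# Hypothetical long-context setup: override rope_scaling at load time,
# assuming the standard Qwen 2.5 YaRN recipe (32K native window, 4x factor).
# Check the checkpoint's config.json before relying on these values.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Jolt-v0.1",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```
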
### **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Jolt-v0.1"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are an advanced AI assistant with expert-level reasoning and knowledge."},
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

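Building on the structured-output tuning noted in the key improvements, here is a minimal sketch that reuses `model` and `tokenizer` from the quickstart to request JSON; the schema and prompt are illustrative, not part of the model card:

```python
import json

# Ask for machine-readable output; the schema here is purely illustrative.
messages = [
    {"role": "system", "content": "Respond with valid JSON only, no prose."},
    {"role": "user", "content": "Extract {\"title\": str, \"year\": int} from: 'Attention Is All You Need (2017)'."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
completion = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)

# json.loads raises if the model wraps the JSON in extra text,
# so validate (or retry) in real applications.
print(json.loads(completion))
```
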
### **Intended Use**
- **Advanced Reasoning & Context Understanding**: Designed for logical deduction, multi-step problem-solving, and complex knowledge-based tasks.
- **Mathematical & Scientific Problem-Solving**: Enhanced capabilities for calculations, theorem proving, and scientific queries.
- **Code Generation & Debugging**: Generates and optimizes code across multiple programming languages (see the pipeline sketch after this list).
- **Structured Data Analysis**: Processes tables, JSON, and structured outputs, making it well suited for data-centric tasks.
- **Multilingual Applications**: High proficiency in over 29 languages, enabling global-scale applications.
- **Extended Content Generation**: Supports detailed document writing, research reports, and instructional guides.

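For the code-generation and multilingual use cases above, the high-level `pipeline` API is a convenient alternative to the manual quickstart; the French prompt and sampling settings below are illustrative:

```python
from transformers import pipeline

# Chat-style text generation through the high-level pipeline API.
pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Jolt-v0.1",
    torch_dtype="auto",
    device_map="auto",
)

# A French prompt asking for a Python function, exercising both the
# multilingual and code-generation use cases at once.
messages = [{"role": "user", "content": "Écris une fonction Python qui inverse une chaîne de caractères."}]
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)

# For chat input, generated_text is the message list with the reply appended.
print(out[0]["generated_text"][-1]["content"])
```
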
74
+ ### **Limitations**
75
+ 1. **High Computational Requirements**: Due to its **14B parameters** and **128K context support**, it requires powerful GPUs or TPUs for efficient inference.
76
+ 2. **Language-Specific Variability**: Performance may vary across supported languages, especially for low-resource languages.
77
+ 3. **Potential Error Accumulation**: Long-text generation can sometimes introduce inconsistencies over extended outputs.
78
+ 4. **Limited Real-World Awareness**: Knowledge is restricted to training data and may not reflect recent world events.
79
+ 5. **Prompt Sensitivity**: Outputs can depend on the specificity and clarity of the input prompt.
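
One common way to soften limitation 1 is quantized loading; a minimal sketch using bitsandbytes 4-bit quantization (the settings are typical defaults, not tested with this checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization cuts VRAM to roughly a quarter of fp16,
# at some cost in output quality; these settings are common defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Jolt-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```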