arshiaafshani committed
Commit 06f245a · verified · 1 Parent(s): ee1c870

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -19,7 +19,7 @@ pipeline_tag: text-generation
 
  # Arsh-llm: A Compact 500M Parameter Powerhouse 🚀
 
- **Arsh-llm** is a 500-million-parameter language model built on the Llama architecture, designed to shine in generating creative stories, coherent text, and functional code. Pretrained for 35 hours on a T4 GPU using a curated mix of small yet powerful datasets, and fine-tuned for 5 hours on conversational data, this model is a lean, mean, text-generating machine with massive potential. With a training loss between **1.2–1.9**, it’s already showing promise and is ready to level up with more training. Buckle up—this is just the beginning! 😎
+ **Arsh-llm** is a 500-million-parameter language model built on the Llama architecture, designed to shine in generating creative stories, coherent text, and functional code. Pretrained for 35 hours on a T4 GPU using a curated mix of small yet powerful datasets, and fine-tuned for 15 hours on conversational data, this model is a lean, mean, text-generating machine with massive potential. With a training loss between **1.2–1.9**, it’s already showing promise and is ready to level up with more training. Buckle up—this is just the beginning! 😎
 
  ## Model Overview
 
@@ -77,7 +77,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ## Training Details
 
  - **Pretraining**: Conducted on a T4 GPU for \~35 hours using a mix of TinyStories, WikiText, and other datasets to build a strong foundation in text and story generation.
- - **Fine-tuning**: 5 hours on ShareGPT-based conversational data with a structured chat template to enhance dialogue capabilities.
+ - **Fine-tuning**: 15 hours on ShareGPT-based conversational data with a structured chat template to enhance dialogue capabilities.
  - **Hardware**: NVIDIA T4 GPU (15GB VRAM).
  - **Training Loss**: Achieved 1.2–1.9, indicating solid performance with significant potential for improvement through extended training.
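
For context, the second hunk sits next to the README's own usage snippet (the `print(tokenizer.decode(outputs[0], skip_special_tokens=True))` line visible in the hunk header). Below is a minimal sketch of that kind of usage, exercising the structured chat template mentioned in the fine-tuning bullet; the repo id `arshiaafshani/Arsh-llm` and the generation settings are assumptions, not taken from this commit.

```python
# Minimal usage sketch (assumptions noted in comments): load the model with
# transformers and prompt it through the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arshiaafshani/Arsh-llm"  # assumed/hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fine-tuning used a structured chat template, so format the prompt with
# apply_chat_template (assumes the tokenizer ships that template).
messages = [{"role": "user", "content": "Tell me a short story about a robot."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```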