🔬👩‍🔬 Newton-7B
This model is a fine-tuned version of openchat/openchat-3.5-0106 on datasets related to science.
It was fine-tuned with QLoRA using axolotl.
This model's training was sponsored by sablo.ai.
See the axolotl config used for training below.
axolotl version: 0.3.0
base_model: openchat/openchat-3.5-0106
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
  - path: merged_all.json
    type:
      field_instruction: instruction
      field_output: output
      format: "GPT4 Correct User: {instruction}<|end_of_turn|>GPT4 Correct Assistant:"
      no_input_format: "GPT4 Correct User: {instruction}<|end_of_turn|>GPT4 Correct Assistant:"
dataset_prepared_path: last_run_prepared
val_set_size: 0.01 # not sure
output_dir: ./newton
adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
lora_r: 128
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: huggingface
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/newton-lora
save_safetensors: true
# change #
gradient_accumulation_steps: 12
micro_batch_size: 6
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
# change #
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10 # not sure
saves_per_epoch: 2
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
debug:
deepspeed:
weight_decay: 0.1 # not sure
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens:
  - "<|end_of_turn|>"
  - "<|pad_0|>"
📊 Datasets
You can find the datasets I used and my ongoing work with them here:
https://huggingface.co/datasets/Weyaxi/sci-datasets
The following datasets were used to train this model:
📐 MATH
🧠 ARC (Note: only the train split)
🧲 camel-ai/physics
⚗️ camel-ai/chemistry
🦠 camel-ai/biology
📊 camel-ai/math
📚 openbookqa
🧩 piqa
🎨 reclor
🔬 scibench
🧪 ScienceQA
🧬 sciq
📏 ScienceEval
🛠️ Multiple-Choice Question & Answer Dataset Conversion Process
I used mistralai/Mixtral-8x7B-Instruct-v0.1 to generate a reasoned, step-by-step answer for each question by providing it with the question and the answer key.
I used the Together AI API for this task (a sketch of the call follows the list below).
The following datasets were converted using this method:
🧠 ARC (Note: only the train split)
📚 openbookqa
🎨 reclor
🧬 sciq
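The conversion itself is a simple API loop. Below is a minimal sketch of the idea, assuming the together Python SDK and a TOGETHER_API_KEY in the environment; the prompt wording and the explain_answer helper are illustrative, not the exact script I used.

from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

def explain_answer(question: str, answer_key: str) -> str:
    # Ask Mixtral for a reasoned explanation, given the question and answer key.
    # Hypothetical helper; the prompt wording is illustrative, not the exact one used.
    response = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages=[{
            "role": "user",
            "content": (
                f"Question:\n{question}\n\n"
                f"The correct answer is: {answer_key}\n\n"
                "Explain step by step why this answer is correct."
            ),
        }],
    )
    return response.choices[0].message.content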
💬 Prompt Template
You can use this prompt template while using the model:
GPT4 Correct (OpenChat)
GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant: {assistant}<|end_of_turn|>GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant:
You can also use the chat template from the tokenizer config, like this:
from transformers import AutoTokenizer

# Model id assumed from this card's title
tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Newton-7B")

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
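Continuing from the snippet above (it reuses tokenizer and tokens), generating a reply takes a few more lines. The model id Weyaxi/Newton-7B is again assumed from this card's title:

import torch
from transformers import AutoModelForCausalLM

# Load the full model in bf16, matching the bf16: true training setting
model = AutoModelForCausalLM.from_pretrained(
    "Weyaxi/Newton-7B", torch_dtype=torch.bfloat16, device_map="auto"
)

input_ids = torch.tensor([tokens], device=model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))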
🤝 Acknowledgments
Thanks to the OpenChat team for fine-tuning the excellent model I used as a base.
Thanks to @jondurbin for the dataset-reformatting code in bagel/data_sources.
Thanks to Together AI for providing everyone with free credits, which I used to convert multiple-choice datasets into an explanation format.
Thanks to Tim Dettmers for his excellent QLoRA work.
Thanks to all the dataset authors mentioned in the datasets section.
Thanks to the axolotl team for the training framework I used to build this model.
Overall, thanks to the entire open-source AI community! 🙌
If you would like to support me: