FinLang/finance-chat-model-investopedia

This Large Language Model (LLM) is an instruction fine-tuned version of mistralai/Mistral-7B-v0.1, trained on our open-sourced finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset, developed for finance applications by the FinLang team.
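
To get a feel for the data the model was trained on, the dataset can be loaded directly from the Hub with the datasets library. The snippet below is a minimal sketch; the "train" split name is an assumption, so check the dataset card for the actual splits and column layout.

# Minimal sketch: inspect the instruction-tuning dataset used for fine-tuning.
# The split name "train" is an assumption; see the dataset card for actual splits.
from datasets import load_dataset

ds = load_dataset("FinLang/investopedia-instruction-tuning-dataset", split="train")
print(ds)      # number of records and column names
print(ds[0])   # one example record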

This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.

Plans

The research paper will be published soon. We are working on a v2 of the model that expands the financial training corpus and uses improved training techniques.

How to Get Started with the Model

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = 'FinLang/investopedia_chat_model'

# Load the fine-tuned PEFT adapter together with its base model in half precision
model = AutoPeftModelForCausalLM.from_pretrained(
  model_id,
  device_map="auto",
  torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example conversation: a system message carrying the context, the user question, and the reference answer
example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n        try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n        CONTEXT:\n        D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]

# Build the prompt from the system and user turns only; the assistant turn is kept as the reference answer
prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)

print(f"Query:\n{example[1]['content']}")
print(f"Context:\n{example[0]['content']}")
print(f"Original Answer:\n{example[2]['content']}")
# Strip the prompt from the pipeline output so only the newly generated answer is printed
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")

Training Details

PEFT Config:

{
 'Technique' : 'QLORA',
 'rank': 256,
 'target_modules' : ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
 'lora_alpha' : 128,
 'lora_dropout' : 0,
 'bias': "none",
}
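
For reference, the configuration above corresponds roughly to a peft LoraConfig applied on top of a 4-bit quantized base model (QLoRA). The sketch below illustrates that mapping and is not the original training script; the bitsandbytes quantization settings are assumptions.

# Illustrative QLoRA setup implied by the config above (not the original
# training script; the 4-bit quantization settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

peft_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable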
    
Hyperparameters:

{
    "epochs": 3,
    "evaluation_strategy": "epoch",
    "gradient_checkpointing": True,
    "max_grad_norm" : 0.3,
    "optimizer" : "adamw_torch_fused",
    "learning_rate" : 2e-4,
    "lr_scheduler_type": "constant",
    "warmup_ratio" : 0.03,
    "per_device_train_batch_size" : 4,  
    "per_device_eval_batch_size" : 4,
    "gradient_accumulation_steps" : 4
}
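
These hyperparameters map onto transformers TrainingArguments as sketched below; the output directory, mixed-precision setting, and the choice of trainer (e.g. trl's SFTTrainer) are assumptions rather than details from the original training script.

# Sketch of the hyperparameters above expressed as TrainingArguments
# (output_dir and bf16 are assumptions; the trainer wiring is not shown).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finance-chat-model-investopedia",  # assumed output path
    num_train_epochs=3,
    evaluation_strategy="epoch",
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    bf16=True,  # assumed mixed-precision setting
)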

Evaluation

We evaluated the model on the test set (22.9k records) of https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using a proprietary LLM as judge on four criteria: Correctness, Faithfulness, Clarity, and Completeness, each rated on a scale of 1-5 (1 being worst, 5 being best). The model obtained an average score of 4.58 out of 5. Human evaluation was performed on a random sample of 10k records, and we found approximately 80% alignment between the human and proprietary-LLM judgments.
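
The judge model and its prompts are proprietary, but the scoring scheme is simple: each record receives a 1-5 rating per criterion, and these are presumably averaged into the reported overall score. The sketch below is hypothetical; the function names, data layout, and example values are illustrative only.

# Hypothetical sketch of aggregating LLM-as-judge ratings into an average score;
# the judge model, prompts, and actual per-record ratings are not public.
from statistics import mean

CRITERIA = ["correctness", "faithfulness", "clarity", "completeness"]

def record_score(ratings: dict) -> float:
    """Average the four 1-5 criterion ratings for one test record."""
    return mean(ratings[c] for c in CRITERIA)

# Illustrative values for two records (not real evaluation data)
judged = [
    {"correctness": 5, "faithfulness": 5, "clarity": 4, "completeness": 5},
    {"correctness": 4, "faithfulness": 5, "clarity": 5, "completeness": 4},
]
overall = mean(record_score(r) for r in judged)
print(f"Average score: {overall:.2f} / 5")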

Bias, Risks, and Limitations

This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.

License

Since non-commercial datasets were used for fine-tuning, we release this model under the cc-by-nc-4.0 license.

Citation [coming soon]
