Model Card for Finance-Llama-8B

This model is a fine-tuned version of unsloth/Meta-Llama-3.1-8B on the Josephgflowers/Finance-Instruct-500k dataset. It's designed for financial tasks, reasoning, and multi-turn conversations.

Key Features

  • Extensive Coverage: Trained on over 500,000 entries spanning financial QA, reasoning, sentiment analysis, topic classification, multilingual NER, and conversational AI. 📚
  • Multi-Turn Conversations: Capable of rich dialogues emphasizing contextual understanding and reasoning.
  • Diverse Data Sources: Includes entries from Cinder, Sujet-Finance-Instruct-177k, Phinance Dataset, BAAI/IndustryInstruction_Finance-Economics, Josephgflowers/Financial-NER-NLP, and many other high-quality datasets.
  • Financial Specialization: Tailored for financial reasoning, question answering, entity recognition, sentiment analysis, and more.

Dataset Details 💾

Finance-Instruct-500k Dataset

Overview

Finance-Instruct-500k is a comprehensive and meticulously curated dataset designed to train advanced language models for financial tasks, reasoning, and multi-turn conversations. Combining data from numerous high-quality financial datasets, this corpus provides over 500,000 entries, offering unparalleled depth and versatility for finance-related instruction tuning and fine-tuning.

The dataset includes content tailored for financial reasoning, question answering, entity recognition, sentiment analysis, address parsing, and multilingual natural language processing (NLP). Its diverse and deduplicated entries make it suitable for a wide range of financial AI applications, including domain-specific assistants, conversational agents, and information extraction systems.
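
For a quick look at the data, the corpus can be loaded directly with the datasets library. The snippet below is a minimal sketch; the exact column names should be checked against the dataset's schema on the Hub:

from datasets import load_dataset

# Stream the corpus so the 500k+ entries are not downloaded up front
ds = load_dataset("Josephgflowers/Finance-Instruct-500k", split="train", streaming=True)

# Inspect the first record to see which fields are available
first = next(iter(ds))
print(first.keys())
print(first)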

Key Features of the Dataset

  • Extensive Coverage: Over 500,000 entries spanning financial QA, reasoning, sentiment analysis, topic classification, multilingual NER, and conversational AI. 🌍
  • Multi-Turn Conversations: Rich dialogues emphasizing contextual understanding and reasoning. 🗣️
  • Diverse Data Sources: Includes entries from Cinder, Sujet-Finance-Instruct-177k, Phinance Dataset, BAAI/IndustryInstruction_Finance-Economics, Josephgflowers/Financial-NER-NLP, and many other high-quality datasets. 📖

Usage

This model can be used with the transformers library pipeline for text generation.

First, make sure the transformers, torch, and accelerate libraries are installed (accelerate is required for device_map-based loading):

pip install transformers torch accelerate
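
For a quick sanity check on a machine with enough GPU memory, the model can be loaded directly through the pipeline API. This is a minimal sketch; the memory-aware loading options are shown in the full example further below:

from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="tarun7r/Finance-Llama-8B",
    torch_dtype=torch.float16,   # half precision to reduce memory use
    device_map="auto",           # automatic device placement (requires accelerate)
)

output = pipe("What is the difference between a stock and a bond?", max_new_tokens=200)
print(output[0]["generated_text"])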

Ollama

You can also use this model with Ollama. Pre-built GGUF versions (FP16 and Q4_K_M) are available at: ollama.com/martain7r/finance-llama-8b

To run the FP16 version:

ollama run martain7r/finance-llama-8b:fp16

To run the Q4_K_M quantized version (smaller and faster, with a slight trade-off in quality):

ollama run martain7r/finance-llama-8b:q4_k_m
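
Once the model has been pulled, it can also be queried programmatically through Ollama's local REST API. This is a minimal sketch, assuming the Ollama server is running on its default port (11434):

import requests

# Ask the locally served model a question via the /api/generate endpoint
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "martain7r/finance-llama-8b:q4_k_m",
        "prompt": "Explain the difference between ETFs and mutual funds.",
        "stream": False,          # return the complete answer as a single JSON object
    },
    timeout=300,
)
print(response.json()["response"])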

Transformers Pipeline 🚀

from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
import torch

# Alternative memory-efficient loading options without bitsandbytes

model_id = "tarun7r/Finance-Llama-8B"

print("Loading model with memory optimizations...")

# Option 1: Use FP16 (half precision) - reduces memory by ~50%
try:
    print("Trying FP16 loading...")
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # Half precision
        device_map="auto",          # Automatic device placement
        low_cpu_mem_usage=True,     # Efficient CPU memory usage during loading
        trust_remote_code=True
    )
    print("βœ“ Model loaded with FP16")
    
except Exception as e:
    print(f"FP16 loading failed: {e}")
    
    # Option 2: CPU offloading - some layers on GPU, some on CPU
    try:
        print("Trying CPU offloading...")
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.float16,
            device_map="balanced",      # Balance between GPU and CPU
            low_cpu_mem_usage=True,
            trust_remote_code=True
        )
        print("βœ“ Model loaded with CPU offloading")
        
    except Exception as e:
        print(f"CPU offloading failed: {e}")
        
        # Option 3: Full CPU loading as fallback
        print("Loading on CPU...")
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.float16,
            device_map="cpu",
            low_cpu_mem_usage=True,
            trust_remote_code=True
        )
        print("βœ“ Model loaded on CPU")

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Create pipeline
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

print("βœ“ Pipeline created successfully!")


# Instruction-style prompt template (Alpaca format)
finance_prompt_template = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
"""

# System and user messages for the finance assistant
messages = [
    {"role": "system", "content": "You are a highly knowledgeable finance chatbot. Your purpose is to provide accurate, insightful, and actionable financial advice to users, tailored to their specific needs and contexts."},
    {"role": "user", "content": "What strategies can an individual investor use to diversify their portfolio effectively in a volatile market?"},
]

# Fill the Alpaca-style template: the system message goes into the Instruction slot and
# the user question into the Input slot, so the "### Response:" marker is present in the prompt
prompt = finance_prompt_template.format(messages[0]["content"], messages[1]["content"])

print("\n--- Generating Response ---")

try:
    outputs = generator(
        prompt,
        max_new_tokens=250,         # Limit response length for memory efficiency
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        # Memory efficient generation settings
        num_beams=1,                # No beam search to save memory
        use_cache=True
    )
    
    # Extract response
    generated_text = outputs[0]['generated_text']
    response_start = generated_text.rfind("### Response:")
    if response_start != -1:
        response = generated_text[response_start + len("### Response:"):].strip()
        print("\n--- Response ---")
        print(response)
    else:
        print(generated_text)
        
    # Clean up GPU memory after generation
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        
except Exception as e:
    print(f"Generation error: {e}")

Citation 📌

@misc{tarun7r/Finance-Llama-8B,
  author    = {tarun7r},
  title     = {tarun7r/Finance-Llama-8B: A Llama 3.1 8B Model Fine-tuned on Josephgflowers/Finance-Instruct-500k},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/tarun7r/Finance-Llama-8B}}
}