
QuantFactory/Open-Insurance-LLM-Llama3-8B-GGUF

This is a quantized version of Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B, created using llama.cpp.

Original Model Card

Open-Insurance-LLM-Llama3-8B

This model is a domain-specific language model based on NVIDIA's Llama3-ChatQA-1.5, fine-tuned for insurance-related queries and conversations. It retains the Llama 3 architecture and is trained specifically for insurance-domain tasks.

Model Details

  • Model Type: Instruction-tuned Language Model
  • Base Model: nvidia/Llama3-ChatQA-1.5-8B
  • Finetuned Model: Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B
  • Quantized Model: Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B-GGUF
  • Model Architecture: Llama
  • Parameters: 8.05 billion
  • Developer: Raj Maharajwala
  • License: llama3
  • Language: English

Quantized Model

Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B-GGUF: https://huggingface.co/Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B-GGUF
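A minimal sketch of loading one of the GGUF quantizations with llama-cpp-python; the quantization filename below is an assumption, so check the repository's file listing for the exact variant you want:

# Minimal loading sketch (llama-cpp-python). The filename is an assumption --
# pick the actual quantization file (e.g. Q4_K_M, Q8_0) from the repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Open-Insurance-LLM-Llama3-8B-GGUF",
    filename="Open-Insurance-LLM-Llama3-8B.Q4_K_M.gguf",  # assumed filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)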

Training Data

The model was fine-tuned with 8-bit LoRA on the InsuranceQA dataset, which contains insurance-specific question-answer pairs and domain knowledge. Trainable params: 20.97M || all params: 8.05B || trainable %: 0.26%. The LoRA configuration is shown below, followed by a sketch of how it can be applied.

LoraConfig(
  r=8,
  lora_alpha=32,
  lora_dropout=0.05,
  bias="none",
  task_type="CAUSAL_LM",
  target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)
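
A hedged sketch of attaching this configuration to the base model with PEFT; loading the base model in 8-bit via bitsandbytes is an assumption based on the 8-bit LoRA note above, and training-loop details (data collation, prepare_model_for_kbit_training, Trainer setup) are omitted:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Base model loaded in 8-bit (assumed setup; the original training script may differ).
base_model = AutoModelForCausalLM.from_pretrained(
    "nvidia/Llama3-ChatQA-1.5-8B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Same configuration as shown above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj',
                    'k_proj', 'q_proj', 'v_proj', 'o_proj'],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # trainable params: ~20.97M of 8.05B (~0.26%)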

Model Architecture

The model uses the Llama 3 architecture with the following key components:

  • 8B parameter configuration
  • Enhanced attention mechanisms from Llama 3
  • ChatQA 1.5 instruction-tuning framework (prompt format sketched after this list)
  • Insurance domain-specific adaptations
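
Since the base model follows the ChatQA-1.5 instruction format, prompts use its System/User/Assistant layout. The helper below is a hedged sketch of that format; the system message and the optional context field are illustrative assumptions:

# Sketch of a ChatQA-1.5-style prompt builder (format assumed from the base model card).
def build_prompt(system: str, question: str, context: str = "") -> str:
    parts = [f"System: {system}"]
    if context:
        parts.append(context)        # optional retrieved passage or policy excerpt
    parts.append(f"User: {question}")
    parts.append("Assistant:")       # the model completes from here
    return "\n\n".join(parts)

prompt = build_prompt(
    system="You are an assistant that answers insurance questions accurately.",
    question="What does a deductible mean in an auto insurance policy?",
)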

Files in Repository

  • Model Files (see the loading sketch after this section):

    • model-00001-of-00004.safetensors (4.98 GB)
    • model-00002-of-00004.safetensors (5 GB)
    • model-00003-of-00004.safetensors (4.92 GB)
    • model-00004-of-00004.safetensors (1.17 GB)
    • model.safetensors.index.json (24 kB)
  • Tokenizer Files:

    • tokenizer.json (17.2 MB)
    • tokenizer_config.json (51.3 kB)
    • special_tokens_map.json (335 Bytes)
  • Configuration Files:

    • config.json (738 Bytes)
    • generation_config.json (143 Bytes)

Use Cases

This model is specifically designed for the following tasks; an example query is sketched after the list:

  • Insurance policy understanding and explanation
  • Claims processing assistance
  • Coverage analysis
  • Insurance terminology clarification
  • Policy comparison and recommendations
  • Risk assessment queries
  • Insurance compliance questions
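
As an illustration of these use cases, the snippet below runs a coverage-style question through the quantized model, reusing the loading pattern from the GGUF sketch above; the filename, sampling parameters, and question are assumptions:

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Load as in the earlier sketch (filename is an assumption).
llm = Llama(
    model_path=hf_hub_download(
        repo_id="QuantFactory/Open-Insurance-LLM-Llama3-8B-GGUF",
        filename="Open-Insurance-LLM-Llama3-8B.Q4_K_M.gguf",
    ),
    n_ctx=4096,
)

prompt = (
    "System: You are an insurance assistant. Answer clearly and concisely.\n\n"
    "User: Does a standard homeowners policy cover water damage from a burst pipe?\n\n"
    "Assistant:"
)

output = llm(prompt, max_tokens=256, temperature=0.2, stop=["User:"])
print(output["choices"][0]["text"].strip())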

Limitations

  • The model's knowledge is limited to its training data cutoff
  • It should not be used as a replacement for professional insurance advice
  • It may occasionally generate plausible-sounding but incorrect information

Bias and Ethics

This model should be used with awareness that:

  • It may reflect biases present in insurance industry training data
  • Output should be verified by insurance professionals for critical decisions
  • It should not be used as the sole basis for insurance decisions
  • The model's responses should be treated as informational, not as legal or professional advice

Citation and Attribution

If you use this model in your research or applications, please cite:

@misc{maharajwala2024openinsurance,
  author = {Raj Maharajwala},
  title = {Open-Insurance-LLM-Llama3-8B},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B}
}

Available Quantizations

The GGUF repository provides 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantized variants.