---
base_model: allenai/OLMoE-1B-7B-0125-Instruct
library_name: transformers
model_name: OLMoE-1B-7B-0125-Instruct-enron
tags:
  - text-generation
  - large-language-model
  - fine-tuning
  - enron
license: apache-2.0
datasets:
  - LLM-PBE/enron-email
---

# Model Card for Tomasal/OLMoE-1B-7B-0125-Instruct-enron

This model is part of the master's thesis *Assessing privacy vs. efficiency tradeoffs in open-source Large Language Models* (spring 2025), which investigates privacy issues in open-source LLMs.

## Model Details

This model is a fine-tuned version of allenai/OLMoE-1B-7B-0125-Instruct, adapted with LoRA (Low-Rank Adaptation). It was trained for three epochs on the Enron email dataset LLM-PBE/enron-email. The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information.

## Training Procedure

The model was fine-tuned using LoRA with the following configuration (a minimal PEFT sketch follows the list):

- LoRA rank: 8
- LoRA alpha: 32
- LoRA dropout: 0.05
- LoRA bias: none
- Optimizer: AdamW with learning rate 1e-4
- Precision: bfloat16 (merged model saved in float32)
- Epochs: 3
- Batch size: 32
- Hardware: NVIDIA GeForce RTX 5090
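
For reference, the configuration above roughly corresponds to the PEFT setup below. This is a minimal sketch assuming the standard `peft` and `transformers` APIs, not the exact training script used in the thesis; in particular, `target_modules` and the `TrainingArguments` beyond the listed hyperparameters are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "allenai/OLMoE-1B-7B-0125-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA configuration matching the values listed above.
# target_modules is an assumption; the card does not say which modules were adapted.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules="all-linear",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights are trainable

# Hyperparameters from the list above; the remaining arguments are illustrative defaults.
training_args = TrainingArguments(
    output_dir="olmoe-enron-lora",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=1e-4,
    bf16=True,
    optim="adamw_torch",
)
# A transformers.Trainer with the tokenized Enron emails would then run the fine-tuning.
```

After training, the adapter can be merged into the base weights (e.g. with `merge_and_unload()` on the PEFT model) and saved, which matches the note above that the merged model was saved in float32.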

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Tomasal/OLMoE-1B-7B-0125-Instruct-enron", torch_dtype="bfloat16"
)
tokenizer = AutoTokenizer.from_pretrained("Tomasal/OLMoE-1B-7B-0125-Instruct-enron")

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Can you write a professional email confirming a meeting with the legal team on Monday at 10am?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
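
Because the model was fine-tuned on raw emails to study memorization, one simple way to probe for leaked training content is to feed it an email-style prefix and inspect the greedy continuation. The sketch below reuses the `model` and `tokenizer` loaded above; the prefix is illustrative and not taken from the dataset.

```python
# Probe for verbatim memorization: continue an email-style prefix with greedy decoding.
# The prefix below is a made-up example, not a known training sample.
prefix = "Subject: Q3 gas trading update\n\nDear all,\n\nPlease find attached"
inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```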