# Llama-3.1-8B-Table-Finetuned
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on table-based question answering tasks.
## Model Details
- Base Model: meta-llama/Meta-Llama-3.1-8B-Instruct
- Fine-tuning Method: QLoRA (4-bit Quantized Low-Rank Adaptation); see the loading sketch after this list
- Context Length: 16K tokens
- Training Data: Table-based question answering dataset
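
If the repository hosts QLoRA adapter weights rather than a merged checkpoint, they can be attached to the 4-bit quantized base model with `peft`. This is a minimal sketch under that assumption; if the weights here are already merged, the Usage snippet below is all you need.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization config matching a typical QLoRA setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantized base model
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned LoRA adapters (assumes this repo contains adapter weights)
model = PeftModel.from_pretrained(base, "pandoradox/llama-3.1-8B-table-finetuned_1.2k")
```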
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "pandoradox/llama-3.1-8B-table-finetuned_1.2k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("pandoradox/llama-3.1-8B-table-finetuned_1.2k")

# Format your input with a table
prompt = '''
<|system|>
You are an expert at analyzing tables and answering questions about them.
<|end|>
<|user|>
Based on the following table:
Table title: Example Table
Headers: Name, Age, City
Row 1: John, 30, New York
Row 2: Jane, 25, Boston
Row 3: Bob, 35, Chicago
Question: Who is the oldest person in the table?
<|end|>
<|assistant|>
'''

# Tokenize, generate, and decode the answer
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
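
The prompt serializes the table as plain text with a title, a header row, and numbered rows. A small helper can build that layout from row data; `format_table` below is a hypothetical convenience, not part of this repository or of `transformers`.

```python
def format_table(title, headers, rows):
    """Serialize a table into the 'Table title / Headers / Row N' prompt layout."""
    lines = [f"Table title: {title}", "Headers: " + ", ".join(headers)]
    for i, row in enumerate(rows, start=1):
        lines.append(f"Row {i}: " + ", ".join(str(cell) for cell in row))
    return "\n".join(lines)

# Example: reproduces the table used in the prompt above
table_text = format_table(
    "Example Table",
    ["Name", "Age", "City"],
    [["John", 30, "New York"], ["Jane", 25, "Boston"], ["Bob", 35, "Chicago"]],
)
print(table_text)
```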
## Limitations
- The model may struggle with very complex tables or ambiguous questions
- Performance may vary with how tables are formatted in the input prompt; the layout shown above matches the training format