Model Card for MTG-Llama: Fine-Tuned Model for Magic: The Gathering

Model Details

  • Model Name: MTG-Llama
  • Version: 1.0
  • Base Model: Llama 3 8B Instruct
  • Fine-Tuning Dataset: MTG-Eval
  • Author: Jake Boggs

Model Description

MTG-Llama is a fine-tuned version of Llama 3 8B Instruct, tailored specifically for understanding and generating responses related to Magic: The Gathering (MTG). The model has been fine-tuned using a custom dataset, MTG-Eval, which includes question-answer pairs covering card descriptions, rules questions, and card interactions.

Intended Use

MTG-Llama is designed to assist users with the following tasks (a minimal usage sketch follows this list):

  • Generating deck construction ideas.
  • Answering in-game rules questions.
  • Understanding card interactions and abilities.
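A minimal inference sketch is shown below. It assumes the fine-tuned weights are published on the Hugging Face Hub as jakeboggs/MTG-Llama and loaded with the transformers library; the sample prompt is only illustrative.

```python
# Sketch: querying MTG-Llama with the Hugging Face transformers chat template.
# The repository id and generation settings are assumptions, not part of the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jakeboggs/MTG-Llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# An example rules question of the kind the model was fine-tuned on.
messages = [
    {"role": "user",
     "content": "Can I target my own creature with hexproof with one of my spells?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```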

Training Data

The fine-tuning dataset, MTG-Eval, consists of 80,032 question-answer pairs generated synthetically. The dataset is categorized into:

  • Card Descriptions: 26,702 examples
  • Rules Questions: 27,104 examples
  • Card Interactions: 26,226 examples

The data was sourced from the MTGJSON project and the Commander Spellbook combo database, reformatted into natural language question-answer pairs using ChatGPT 3.5.
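For readers who want to inspect the training data, the sketch below loads it with the datasets library. The repository id jakeboggs/MTG-Eval is a guess based on the model name and may differ from the actual dataset location.

```python
# Sketch: loading and inspecting the MTG-Eval question-answer pairs.
# The Hub repository id is an assumption; only the dataset name comes from the card.
from datasets import load_dataset

dataset = load_dataset("jakeboggs/MTG-Eval", split="train")
print(dataset)      # column names and total example count
print(dataset[0])   # one synthetic question-answer pair
```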

Training Procedure

The model was fine-tuned using QLoRA with the following hyperparameters (a configuration sketch follows this list):

  • LoRA rank (r): 64
  • LoRA alpha: 32
  • Training steps: 75
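The sketch below shows one way to reproduce this setup with the Hugging Face peft and bitsandbytes libraries. Only the rank, alpha, and step count come from the card; the quantization settings, target modules, and training stack are assumptions.

```python
# Sketch: a QLoRA configuration matching the hyperparameters listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization of the frozen base weights (the "Q" in QLoRA); assumed settings.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # base model named in the card
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                                    # rank from the card
    lora_alpha=32,                           # alpha from the card
    target_modules=["q_proj", "v_proj"],     # assumed; not specified in the card
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training then runs for 75 optimizer steps over the MTG-Eval pairs with any
# causal-LM trainer (for example, trl's SFTTrainer).
```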

Acknowledgments

Thanks to the team at Commander Spellbook for generously sharing their dataset, without which this research would not be possible. All generated data is unofficial Fan Content permitted under the Fan Content Policy. Not approved/endorsed by Wizards. Portions of the materials used are property of Wizards of the Coast. ©Wizards of the Coast LLC.
