---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model

This repository contains a LoRA adapter for meta-llama/Llama-3.1-8B-Instruct, fine-tuned on a general NLP dataset with the Flower federated learning framework.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
Rather than being trained on a single centralized machine, the adapter was produced through federated learning: clients fine-tune the LoRA weights locally on their share of vicgalle/alpaca-gpt4, and the Flower server aggregates the resulting updates into a single shared adapter.
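## How to Use

Since this is a PEFT LoRA adapter rather than a full model, it must be loaded on top of the base model. The sketch below uses the standard `transformers` + `peft` loading pattern; the adapter repo id `zjudai/<adapter-repo>` is a placeholder — replace it with this repository's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "zjudai/<adapter-repo>"  # placeholder: use this repo's id

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Generate with the adapted model.
inputs = tokenizer("What is federated learning?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

You can also merge the adapter into the base weights with `model.merge_and_unload()` if you want a standalone model for deployment.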
## Links
- FlowerTune Homepage: https://huggingface.co/zjudai/FlowerTune
- FlowerTune Collection: https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439