---
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
  - peft
  - lora
  - federated-learning
  - flower
datasets:
  - vicgalle/alpaca-gpt4
---

FlowerTune LoRA Model

This is a LoRA adapter for meta-llama/Llama-3.2-1B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.

Training Details

  • Dataset: vicgalle/alpaca-gpt4
  • Training method: Federated LoRA fine-tuning with FlowerTune (see the sketch after this list)
  • Framework: Flower
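
This card does not list the LoRA hyperparameters, so the following is only a minimal sketch of how a LoRA adapter is typically attached to the base model on each federated client with peft; the rank, alpha, dropout, and target modules below are illustrative assumptions, not the actual FlowerTune run configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative LoRA hyperparameters only; the real FlowerTune
# configuration used for this adapter is not shown on this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```

Because the base model stays frozen, only the small set of adapter weights needs to be communicated in each federated round.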

The adapter was trained on the vicgalle/alpaca-gpt4 instruction-following dataset in a distributed, federated setup: the base model stays frozen while only the LoRA adapter weights are updated, so the resulting adapter is lightweight and can be loaded directly on top of meta-llama/Llama-3.2-1B-Instruct.
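
A minimal usage sketch for loading the adapter for inference with transformers and peft; the adapter_id below is a placeholder, substitute this repository's id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "<this-repo-id>"  # placeholder: replace with this repository's Hub id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

prompt = "Explain federated learning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```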

Links