gCao/mistral-7b-dpo-arena

Tags: PEFT · Safetensors · fine-tuning · dpo · arena-dataset · lora · rlhf
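
The tags above describe a PEFT LoRA adapter stored as safetensors and trained with DPO. A minimal loading sketch follows; the Mistral-7B base checkpoint id is an assumption, since this page does not name the base model the adapter was trained against.

```python
# Minimal sketch: attach this LoRA adapter to a Mistral-7B base model.
# NOTE: the base checkpoint id below is an assumption, not stated anywhere in this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"       # assumed base checkpoint
adapter_id = "gCao/mistral-7b-dpo-arena"    # this repository

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)    # loads adapter_model.safetensors
tokenizer = AutoTokenizer.from_pretrained(adapter_id)  # tokenizer files ship with the adapter

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```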
Files and versions
  • 1 contributor
History: 3 commits
Latest commit: gCao · Add model card · 3d854b9 (verified) · 2 months ago
  • .gitattributes
    1.52 kB
    initial commit 2 months ago
  • README.md
    1.37 kB
    Add model card 2 months ago
  • adapter_config.json
    789 Bytes
    Add DPO model trained on Arena dataset 2 months ago
  • adapter_model.safetensors
    27.3 MB
    Add DPO model trained on Arena dataset 2 months ago (inspection sketch below)
  • added_tokens.json
    51 Bytes
    Add DPO model trained on Arena dataset 2 months ago
  • chat_template.jinja
    196 Bytes
    Add DPO model trained on Arena dataset 2 months ago (usage sketch below)
  • special_tokens_map.json
    449 Bytes
    Add DPO model trained on Arena dataset 2 months ago
  • tokenizer.json
    3.51 MB
    Add DPO model trained on Arena dataset 2 months ago
  • tokenizer.model
    493 kB
    Add DPO model trained on Arena dataset 2 months ago
  • tokenizer_config.json
    1.44 kB
    Add DPO model trained on Arena dataset 2 months ago
  • training_args.bin
    6.26 kB
    Add DPO model trained on Arena dataset 2 months ago (training sketch below)

    Detected Pickle imports (11):
    • "accelerate.state.PartialState"
    • "trl.trainer.dpo_config.DPOConfig"
    • "transformers.trainer_utils.SaveStrategy"
    • "transformers.trainer_utils.IntervalStrategy"
    • "transformers.trainer_utils.SchedulerType"
    • "transformers.training_args.OptimizerNames"
    • "transformers.trainer_pt_utils.AcceleratorConfig"
    • "accelerate.utils.dataclasses.DistributedType"
    • "trl.trainer.dpo_config.FDivergenceType"
    • "torch.device"
    • "transformers.trainer_utils.HubStrategy"
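
The 27.3 MB adapter_model.safetensors listed above contains only the LoRA weights. A short sketch for inspecting it with huggingface_hub and the safetensors library:

```python
# Download and inspect the LoRA tensors stored in adapter_model.safetensors.
from huggingface_hub import hf_hub_download
from safetensors import safe_open

path = hf_hub_download("gCao/mistral-7b-dpo-arena", "adapter_model.safetensors")
with safe_open(path, framework="pt") as f:
    for name in f.keys():
        # LoRA adapters typically store paired lora_A / lora_B matrices per target module.
        print(name, tuple(f.get_tensor(name).shape))
```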
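
chat_template.jinja ships alongside the tokenizer files, so conversations can be formatted with the tokenizer's chat-template API; the message content below is illustrative:

```python
# Render a conversation through the bundled chat template before generation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gCao/mistral-7b-dpo-arena")
messages = [{"role": "user", "content": "Summarize what DPO training changes about a model."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)  # the template from chat_template.jinja rendered around the messages
```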
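
training_args.bin pickles a trl DPOConfig together with transformers and accelerate training state, which is what TRL's DPOTrainer writes out when it saves a run. A hedged sketch of that kind of run follows: the base checkpoint, hyperparameters, LoRA settings, and the tiny in-memory preference dataset are all assumptions (the actual Arena-derived dataset is not identified on this page), and the argument names follow recent TRL releases.

```python
# Hedged sketch of a TRL DPO + LoRA run of the kind that produces a training_args.bin
# like the one above. Every value below is illustrative, not read from this repo.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Stand-in preference data; a real run would use an Arena-style
# prompt / chosen / rejected preference dataset.
train_dataset = Dataset.from_dict({
    "prompt":   ["What does DPO optimize?"],
    "chosen":   ["DPO optimizes the policy directly on preference pairs, without a separate reward model."],
    "rejected": ["DPO is a database protocol."],
})

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
args = DPOConfig(output_dir="mistral-7b-dpo-arena", beta=0.1,
                 per_device_train_batch_size=1, num_train_epochs=1)

trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer, peft_config=peft_config)
trainer.train()
trainer.save_model()  # writes the adapter files plus training_args.bin
```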