---
base_model: bofenghuang/vigogne-2-13b-instruct
tags:
- generated_from_trainer
model-index:
- name: PointCon-Vigogne-13B-LoRA
  results: []
---

# PointCon-Vigogne-13B-LoRA

This model is a fine-tuned version of [bofenghuang/vigogne-2-13b-instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8656

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0885        | 0.1   | 30   | 2.0357          |
| 2.0024        | 0.19  | 60   | 1.9733          |
| 1.9995        | 0.29  | 90   | 1.9406          |
| 1.9752        | 0.38  | 120  | 1.9285          |
| 1.9235        | 0.48  | 150  | 1.9060          |
| 1.9345        | 0.57  | 180  | 1.8924          |
| 1.8576        | 0.67  | 210  | 1.8818          |
| 1.8693        | 0.76  | 240  | 1.8734          |
| 1.8686        | 0.86  | 270  | 1.8695          |
| 1.8814        | 0.95  | 300  | 1.8656          |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
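
## How to use

The model name suggests this repository holds LoRA adapter weights rather than a full model; assuming so, they need to be attached to the base model with [PEFT](https://github.com/huggingface/peft). Below is a minimal loading sketch; `adapter_id` is a placeholder and should be replaced with this repository's full Hub path, and the example prompt is illustrative only.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "bofenghuang/vigogne-2-13b-instruct"
# Placeholder: replace with this repository's full Hub path.
adapter_id = "PointCon-Vigogne-13B-LoRA"

# Load the base model in half precision and attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "Explique brièvement ce qu'est un modèle de langage."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For best results, prompts should follow the instruction template of the base model; see the [vigogne-2-13b-instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct) card for details.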