Model Card for GloriaABK1/llama-2-7b-chat-norwegian-sum
Description:
This model is a LoRA adapter for abstractive summarization of Norwegian text, fine-tuned from RuterNorway/Llama-2-7b-chat-norwegian (a Norwegian adaptation of Llama-2-7b-chat). Low-rank adaptation (LoRA) is applied to projection layers of the LLaMA architecture. The adapter was trained on the NorSumm dataset (SamiaT/NorSumm) with 144 training examples, validated on 36 examples, and tested on 198 examples.
Model Details
Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: GloriaABK1
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: GloriaABK1
- Model type: LoRA adapter for causal language modeling (abstractive summarization)
- Language(s) (NLP): Norwegian
- License: [More Information Needed]
- Finetuned from model [optional]: RuterNorway/Llama-2-7b-chat-norwegian
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Abstractive summarization of Norwegian-language articles, user reviews, and short documents.
Direct Use
[More Information Needed]
Downstream Use [optional]
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
Training Details
Training Data
Splits from SamiaT/NorSumm:
- Train: 144 examples
- Validation: 36 examples
- Test: 198 examples
Preprocessing:
- Tokenized with AutoTokenizer from Hugging Face Transformers.
- Inputs truncated to 512 tokens, summaries to 128 tokens.
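The truncation step described above can be sketched as follows. The `tokenizer` argument is assumed to be a Hugging Face tokenizer (e.g. `AutoTokenizer.from_pretrained("RuterNorway/Llama-2-7b-chat-norwegian")`), and the `"article"`/`"summary"` field names are assumptions about the dataset schema:

```python
# Preprocessing sketch applying the card's stated truncation lengths.
# Field names "article" and "summary" are assumptions about NorSumm's schema.
MAX_INPUT_TOKENS = 512    # inputs truncated to 512 tokens (from the card)
MAX_SUMMARY_TOKENS = 128  # summaries truncated to 128 tokens (from the card)


def preprocess(example, tokenizer):
    """Tokenize one article/summary pair with truncation."""
    model_inputs = tokenizer(
        example["article"], truncation=True, max_length=MAX_INPUT_TOKENS
    )
    labels = tokenizer(
        example["summary"], truncation=True, max_length=MAX_SUMMARY_TOKENS
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs


# Typical usage with 🤗 Datasets (assumed workflow):
# tokenizer = AutoTokenizer.from_pretrained("RuterNorway/Llama-2-7b-chat-norwegian")
# tokenized = dataset.map(lambda ex: preprocess(ex, tokenizer))
```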
Model tree for GloriaABK1/llama-2-7b-chat-norwegian-sum
- Base model: RuterNorway/Llama-2-7b-chat-norwegian