---
license: other
widget:
- text: 'What is the capital of France?'
- text: 'What is wikipedia?'
- text: 'What is a meme?'
language:
- en
pipeline_tag: text-generation
tags:
- conversational
- chat
- assistant
---
# OPT-1.3b-Chat

This is a text-generation model based on Meta's [OPT-1.3B](https://huggingface.co/facebook/opt-1.3b), trained using the DeepSpeed library. Given a user input, the model generates a natural, engaging conversational response.

A demo is [available here](https://huggingface.co/spaces/KoalaAI/OPT-Chat).
The model works best on simple Q&A-style questions, not open-ended prompts of the kind ChatGPT handles.

## Training Details

- The base model is [OPT-1.3B](https://huggingface.co/facebook/opt-1.3b), a decoder-only transformer with 1.3 billion parameters, pre-trained on a large text corpus with the causal language modeling objective.
- The model was trained on a single NVIDIA A100 GPU using DeepSpeed pipeline parallelism and the ZeRO optimizer.
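
For illustration, DeepSpeed training of this kind is driven by a JSON-style configuration. The sketch below, written as a Python dict passed to `deepspeed.initialize`, shows one plausible ZeRO setup; every value in it is an assumption for illustration, since the card does not publish the actual training configuration.

```python
import deepspeed
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

# Hypothetical config; the real training settings are not published here
ds_config = {
    "train_batch_size": 16,                    # assumed batch size
    "fp16": {"enabled": True},                 # mixed precision on the A100
    "zero_optimization": {"stage": 2},         # ZeRO: shard optimizer state and gradients
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
}

# Wrap the model in a DeepSpeed engine (run under the deepspeed launcher)
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```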

## Model Details
- Number of parameters: 1.3 billion
- Number of layers: 24
- Number of attention heads: 32
- Context size: 2048
- Vocabulary size: 50,272
- Embedding size: 2048
- Feed-forward size: 8192
- Dropout rate: 0.1
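
As a rough sanity check, the figures above reproduce the 1.3B parameter count (an estimate that omits biases, LayerNorms, and position embeddings):

```python
# Back-of-the-envelope parameter count from the architecture above
d_model, n_layers, d_ffn, vocab = 2048, 24, 8192, 50272

embeddings = vocab * d_model            # token embedding matrix
attention  = 4 * d_model * d_model      # Q, K, V, and output projections
ffn        = 2 * d_model * d_ffn        # up- and down-projections
per_layer  = attention + ffn

total = embeddings + n_layers * per_layer
print(f"{total / 1e9:.2f}B parameters")  # ~1.31B, matching the 1.3B figure
```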

## Usage

You can use this model directly with the Hugging Face pipeline for text generation:

```python
from transformers import pipeline

# Load the model into a text-generation pipeline
generator = pipeline('text-generation', model='DarwinAnim8or/OPT-1.3b-Chat')

# Returns a list of dicts with the output under the 'generated_text' key
generator("Hello, how are you?")
```

### Suggested formatting
The training data uses the following format:
```
Human: <question>
Assistant: <answer>
```

For best results, follow the same format as closely as possible; the sketch below shows one way to do so.
We intend to train a future model on the OpenAssistant dataset.
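
For example (a minimal sketch; the generation settings and the trimming at a follow-up `Human:` marker are assumptions, not part of this card):

```python
from transformers import pipeline

generator = pipeline('text-generation', model='DarwinAnim8or/OPT-1.3b-Chat')

# Wrap the question in the Human/Assistant format used in training
question = "What is the capital of France?"
prompt = f"Human: {question}\nAssistant:"

result = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.7)
text = result[0]['generated_text']

# Keep only the assistant's reply: drop the prompt, and stop at a
# follow-up "Human:" turn if the model generates one
reply = text[len(prompt):].split("Human:")[0].strip()
print(reply)
```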

## License
This model is licensed under the [OPT-175B license](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/MODEL_LICENSE.md), which is a non-commercial research license. Please read the full license terms before using this model.

## Ethical Considerations
This model is intended for research purposes only and should not be used for malicious or harmful applications. It may generate offensive or inappropriate content that does not reflect the views or opinions of the authors or of Meta. Users are responsible for ensuring that generated content complies with ethical and legal standards.