---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: Model_1A_Clinton
  results: []
---

# Model_1A_Clinton

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a large corpus of William J. Clinton's second-term discourse on terrorism.

## To Prompt the Model

Try entering single words or short phrases, such as "terrorism is", "national security", or "our foreign policy should be",
in the inference widget on the right-hand side of this page.
Then click "Compute" and wait for the results. The model will take a few seconds to load on your first prompt.
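
If you prefer to run the model outside the widget, a minimal local sketch using the `transformers` pipeline API follows. The full repository id (the `<user>/` namespace) is an assumption here; substitute the actual path of this repository.

```python
# Minimal sketch: prompting the model locally with the text-generation
# pipeline. "<user>/Model_1A_Clinton" is a placeholder repo id; replace
# "<user>" with the namespace that actually hosts this model.
from transformers import pipeline, set_seed

set_seed(42)  # optional, for reproducible sampling
generator = pipeline("text-generation", model="<user>/Model_1A_Clinton")

outputs = generator(
    "terrorism is",        # short prompts like those suggested above
    max_new_tokens=50,
    do_sample=True,
)
print(outputs[0]["generated_text"])
```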

## Intended uses & limitations

This model is intended as an experiment in the utility of LLMs for discourse analysis of a specific corpus of political rhetoric.


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
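
For readers who want to reproduce this configuration, a hedged sketch of the corresponding `TrainingArguments` is below. The output directory is a placeholder, and the dataset loading and `Trainer` wiring are not documented on this card.

```python
# Sketch of a TrainingArguments object matching the hyperparameters listed
# above (Transformers 4.35.2). output_dir is a placeholder; the Adam betas
# and epsilon shown on the card are the library defaults, stated explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Model_1A_Clinton",   # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # "train_batch_size: 4" on this card
    per_device_eval_batch_size=8,    # "eval_batch_size: 8" on this card
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```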

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0