# gpt2-tuned-expanded

A GPT-2 model fine-tuned on speech transcription data.
## Model Details
- Base Model: gpt2
- Fine-tuned from checkpoint: /home/klp65/rds/hpc-work/whisper-lm/train_gpt/gpt_expanded_corpora/checkpoint-1484745
- Language: English
- Model Type: Causal Language Model
## Usage

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("pkailin2002/gpt2-tuned-expanded")
tokenizer = GPT2Tokenizer.from_pretrained("pkailin2002/gpt2-tuned-expanded")

# Generate text
input_text = "Your prompt here"
inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(
    inputs,
    max_length=100,
    num_return_sequences=1,
    do_sample=True,  # sampling must be enabled for temperature to take effect
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
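The fine-tuning checkpoint path mentions `whisper-lm`, which suggests this model may serve as an external language model for rescoring speech-recognition hypotheses. A minimal sketch of ranking candidate transcripts by LM log-probability (the helper name and the example hypotheses are illustrative, not part of this repository):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def sequence_logprob(model, tokenizer, text):
    """Total log-probability the LM assigns to `text` (higher = more fluent)."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token;
    # multiply by the number of predicted tokens to get the sequence total.
    return -out.loss.item() * (ids.shape[1] - 1)

model = GPT2LMHeadModel.from_pretrained("pkailin2002/gpt2-tuned-expanded")
tokenizer = GPT2Tokenizer.from_pretrained("pkailin2002/gpt2-tuned-expanded")

# Hypothetical N-best list from an ASR system
hypotheses = ["i scream for ice cream", "eye scream four ice cream"]
best = max(hypotheses, key=lambda h: sequence_logprob(model, tokenizer, h))
print(best)
```

In a rescoring pipeline the LM score is usually interpolated with the acoustic model's score rather than used alone.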
## Training Details
This model was fine-tuned using the Hugging Face Transformers library.
## Intended Use
This model is intended for research and educational purposes.
## Limitations
Please be aware that language models can generate biased or inappropriate content. Use responsibly.