WI2C Llama 3.1 70B Absurdist Adapter

This is a LoRA adapter for Meta-Llama-3.1-70B-Instruct, fine-tuned to generate authentic absurdist memes and humor in the style of internet culture.

Model Details

Model Description

WI2C (What If It Could) is an AI research project focused on generating authentic absurdist memes. This adapter has been fine-tuned on carefully curated absurdist content to capture the unique voice and humor style of internet absurdism.

  • Developed by: mistakeknot
  • Model type: LoRA Adapter for Causal Language Model
  • Language(s) (NLP): English
  • License: Llama 3.1 Community License
  • Finetuned from model: Meta-Llama-3.1-70B-Instruct
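
Because the artifact is a PEFT LoRA adapter rather than a full set of model weights, its configuration (rank, target modules, base model) can be inspected without downloading the 70B base model. A minimal sketch:

from peft import PeftConfig

# Fetch and inspect only the adapter configuration;
# this does not load the 70B base model.
config = PeftConfig.from_pretrained("mistakeknot/wi2c-llama3-1")
print(config.base_model_name_or_path)  # meta-llama/Meta-Llama-3.1-70B-Instruct
print(config.peft_type)                # LORA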

Model Sources

  • Repository: [More Information Needed]
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Uses

Direct Use

This model is designed for:

  • Generating absurdist meme captions
  • Creating surreal humor content
  • Entertainment and creative writing purposes
  • Research into AI-generated humor

Downstream Use

The model can be integrated into:

  • Meme generation applications
  • Creative writing tools
  • Social media content creation platforms (see the deployment sketch below)
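
For integrations like these, a common deployment pattern is to merge the LoRA weights into the base model so the application serves a single standalone checkpoint without the PEFT wrapper. A minimal, self-contained sketch; the output directory name is an illustrative assumption:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model = "meta-llama/Meta-Llama-3.1-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "mistakeknot/wi2c-llama3-1")

# Fold the LoRA weights into the base model and save a standalone copy;
# the output path below is a hypothetical placeholder.
merged = model.merge_and_unload()
merged.save_pretrained("wi2c-llama3-1-merged")
AutoTokenizer.from_pretrained(base_model).save_pretrained("wi2c-llama3-1-merged")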

Out-of-Scope Use

This model should NOT be used for:

  • Generating harmful, offensive, or inappropriate content
  • Creating misinformation or deceptive content
  • Commercial use without proper attribution
  • Any use that violates the Llama 3.1 Community License

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

How to Get Started with the Model

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model in half precision, sharded across available devices
base_model = "meta-llama/Meta-Llama-3.1-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Attach the WI2C LoRA adapter
model = PeftModel.from_pretrained(model, "mistakeknot/wi2c-llama3-1")

# Load the tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Generate absurdist content; do_sample=True is required for temperature to take effect
prompt = "Write an absurdist meme caption:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
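
Since the base checkpoint is an Instruct model, applying the tokenizer's chat template usually produces better-formed completions than a raw prompt. A minimal sketch reusing model and tokenizer from above; the system message is an illustrative assumption, not a documented part of the adapter's training setup:

# Chat-style prompting via the Llama 3.1 chat template.
# The system message below is a hypothetical example.
messages = [
    {"role": "system", "content": "You write absurdist meme captions."},
    {"role": "user", "content": "Write an absurdist meme caption about Mondays."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))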

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary

[More Information Needed]

More Information

[More Information Needed]

Model Card Authors

[More Information Needed]

Model Card Contact

[More Information Needed]

Framework versions

  • PEFT 0.15.1