OrcaAgent-llama3.2-1b

This model is finetuned on a subset of microsoft/orca-agentinstruct-1M-v1; dataset details and prompts can be found in Isotonic/agentinstruct-1Mv1-combined.
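
For reference, the combined dataset can be browsed with the datasets library. A minimal sketch, assuming a standard "train" split is available on the Hub (check the dataset card if the split name differs):

from datasets import load_dataset

# Stream the finetuning data to inspect the prompts without a full download
# (the "train" split name is an assumption, not confirmed by this card)
ds = load_dataset("Isotonic/agentinstruct-1Mv1-combined", split="train", streaming=True)
print(next(iter(ds)))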

Use

import torch
from transformers import pipeline

model_id = "Isotonic/OrcaAgent-llama3.2-1b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 checkpoint
    device_map="auto",           # place the model on GPU when available
)

# A single-turn classification prompt in the style of the AgentInstruct tasks
messages = [
    {"role": "user", "content": "\n\nYou are an expert text classifier. You need to classify the text below into one of the given classes. \n\nText:\n\nThe anticipation of the meteor shower has filled the astronomy club with an infectious excitement, as we prepare our telescopes for what could be a once-in-a-lifetime celestial event.\n\nClasses:\n\nAffirmative Sentiment;Mildly Affirmative Sentiment;Exuberant Endorsement;Objective Assessment;Critical Sentiment;Subdued Negative Sentiment;Intense Negative Sentiment;Ambivalent Sentiment;Sarcastic Sentiment;Ironical Sentiment;Apathetic Sentiment;Elation/Exhilaration Sentiment;Credibility Endorsement;Apprehension/Anxiety;Unexpected Positive Outcome;Melancholic Sentiment;Aversive Repulsion;Indignant Discontent;Expectant Enthusiasm;Affectionate Appreciation;Anticipatory Positivity;Expectation of Negative Outcome;Nuanced Sentiment Complexity\n\nThe output format must be:\n\nFinal class: {selected_class}\n\n"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The chat pipeline returns the whole conversation; the last entry is the model's reply
print(outputs[0]["generated_text"][-1])
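
The prompt pins the output format to "Final class: {selected_class}", so the predicted label can be recovered with a simple pattern match. A minimal sketch, reusing the outputs object from above:

import re

# The assistant reply is the last message in the returned conversation
reply = outputs[0]["generated_text"][-1]["content"]
match = re.search(r"Final class:\s*(.+)", reply)
print(match.group(1).strip() if match else reply)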
Model size: 1.24B params (Safetensors, BF16)
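
These figures can be verified locally against the loaded checkpoint. A quick sketch, reusing the pipe object from the usage example above:

# Parameter count and weight dtype of the underlying model
model = pipe.model
print(f"{model.num_parameters() / 1e9:.2f}B parameters")  # ~1.24B
print(model.dtype)  # torch.bfloat16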


Datasets used to train Isotonic/OrcaAgent-llama3.2-1b: Isotonic/agentinstruct-1Mv1-combined, built from microsoft/orca-agentinstruct-1M-v1.