Trillama-8B is an 8B-parameter LLM built on Llama-3-8B, the latest model from Meta. It is a fine-tune focused on improving the model's already strong logic and reasoning.

```python
import transformers
import torch

model_id = "senseable/Trillama-8B"

# Load the model in bfloat16 and let it be placed automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("Explain the meaning of life.")
```