
# Mistral 7B V0.1

Implementation of the Mistral 7B model by the phospho team. You can test it directly in the Hugging Face Space.

## Use in transformers

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, pipeline, TextStreamer

tokenizer = LlamaTokenizer.from_pretrained("phospho-app/mistral_7b_V0.1")
model = LlamaForCausalLM.from_pretrained("phospho-app/mistral_7b_V0.1", torch_dtype=torch.bfloat16)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```
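
The snippet above builds the pipeline but does not run a generation. A minimal usage sketch follows; the prompt text, `max_new_tokens`, and sampling settings are illustrative choices, not part of the original card:

```python
# Illustrative prompt and generation settings (not from the original card).
prompt = "What is the capital of France?"

# Generate with the pipeline; it returns a list of dicts with "generated_text".
outputs = pipe(prompt, max_new_tokens=100, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])

# Optional: stream tokens to stdout as they are produced, using the
# TextStreamer imported above together with model.generate.
streamer = TextStreamer(tokenizer, skip_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
model.generate(**inputs, max_new_tokens=100, streamer=streamer)
```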