# 🐻‍❄️COKAL-v1_70B🐻‍❄️

## Model Details

**Model Developers:** Seungyoo Lee (DopeorNope)

**Input:** models take text input only.

**Output:** models generate text only.

**Model Architecture:** COKAL-v1_70B is an autoregressive 70B-parameter language model based on the LLaMA2 transformer architecture.

**Base Model:** LLaMA2 70B
## Training Dataset

- SFT training dataset: [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) (a quick way to inspect it is sketched below)
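
A minimal way to load and peek at the SFT data before training; the `instruction`/`output` field names follow the Open-Platypus dataset card and should be verified against the loaded dataset:

```python
from datasets import load_dataset

# Pull the SFT corpus named above and peek at one example.
ds = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(ds)                    # row count and column names
print(ds[0]["instruction"])  # an example instruction
print(ds[0]["output"])       # its reference answer
```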
## Training

The model was developed on 8× NVIDIA A100 GPUs; a hedged sketch of what such an SFT run could look like is given below.
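
The card does not publish the training recipe, so the following is only a minimal sketch of a full-parameter SFT run on Open-Platypus. The base checkpoint id (`meta-llama/Llama-2-70b-hf`), the prompt formatting, and all hyperparameters are assumptions; in practice, fitting a 70B model on 8× A100 would also require sharded training (e.g. DeepSpeed ZeRO-3 or FSDP) on top of this skeleton.

```python
# Illustrative SFT sketch -- base checkpoint, prompt template, and
# hyperparameters are assumptions, not the author's actual recipe.
# Launch across 8 GPUs with: torchrun --nproc_per_node=8 sft.py
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-2-70b-hf"         # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship no pad token

raw = load_dataset("garage-bAInd/Open-Platypus", split="train")

def tokenize(example):
    # Naive instruction/output concatenation; the real prompt format is unknown.
    text = example["instruction"] + "\n" + example["output"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=2048)

train_ds = raw.map(tokenize, remove_columns=raw.column_names)

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cokal-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```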
## Implementation Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "DopeorNope/COKAL-v1_70B"

# Load the weights in half precision and shard them across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
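
Once loaded, a quick generation check could look like the following; the Alpaca-style prompt template and the sampling settings are assumptions, since the card does not specify a prompt format:

```python
# Smoke test (prompt template and decoding settings are illustrative).
prompt = ("### Instruction:\n"
          "Summarize the transformer architecture in two sentences.\n\n"
          "### Response:\n")
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128,
                            do_sample=True, temperature=0.7, top_p=0.9)
print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```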