
This model is randomly initialized, using the config from neuralmagic/mpt-7b-gsm8k-pt but with a smaller size. Note that the model weights are stored in float16.

Code:

```python
from huggingface_hub import create_repo, upload_folder
import torch
import transformers
import os

model_id = 'neuralmagic/mpt-7b-gsm8k-pt'
save_path = '/tmp/yujiepan/mpt-tiny-random'
repo_id = 'yujiepan/mpt-tiny-random'

# Shrink the MPT config. MptConfig aliases the generic attribute names
# (hidden_size / num_attention_heads / num_hidden_layers) to the MPT-native
# ones (d_model / n_heads / n_layers), so both spellings are set here.
config = transformers.AutoConfig.from_pretrained(model_id)
config.hidden_size = 8
config.d_model = 8
config.num_attention_heads = 2
config.n_heads = 2
config.num_hidden_layers = 2
config.n_layers = 2
print(config)

# Randomly initialize a tiny model from the shrunken config, in float16.
model = transformers.AutoModelForCausalLM.from_config(config, torch_dtype=torch.float16)
model = model.half()
model.save_pretrained(save_path)

# Reuse the original tokenizer unchanged.
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.save_pretrained(save_path)

# Export an FP16 OpenVINO IR alongside the PyTorch weights.
from optimum.intel.openvino import OVModelForCausalLM
ovmodel = OVModelForCausalLM.from_pretrained(save_path, export=True)
ovmodel = ovmodel.half()
ovmodel.save_pretrained(save_path)

# Inspect the saved files, then push everything to the Hub.
os.system(f'ls -alh {save_path}')
create_repo(repo_id, exist_ok=True)
upload_folder(repo_id=repo_id, folder_path=save_path)
```
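
To sanity-check the uploaded checkpoint, a minimal sketch like the one below (not part of the original script) loads it with `transformers` and generates a few tokens; the output is gibberish, since the weights are random.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = 'yujiepan/mpt-tiny-random'
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Upcast to float32 here: float16 inference on CPU is not supported for all ops.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)

inputs = tokenizer('Hello', return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```

Since the repository also carries the exported OpenVINO IR, the same checkpoint should load directly via `OVModelForCausalLM.from_pretrained(repo_id)` as well.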