nickhugs committed on
Commit
fb44eff
1 Parent(s): 636ad0a

Update eos_token_id / bos_token_id in config.json


The current values don't even appear to be in the vocab. This change makes config.json match the recent updates made to the tokenizer config.
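For context, a quick way to check the mismatch this commit fixes is to compare the token IDs stored in config.json against what the tokenizer itself reports. The snippet below is a minimal sketch using the Transformers library; "org/example-gpt2-mq" is a hypothetical placeholder, since this page does not name the actual model repo.

from transformers import AutoConfig, AutoTokenizer

repo_id = "org/example-gpt2-mq"  # hypothetical placeholder for the real repo id

# trust_remote_code is needed because config.json maps AutoConfig to the
# custom configuration_gpt2_mq.GPT2CustomConfig class shipped in the repo.
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print("config    bos/eos:", config.bos_token_id, config.eos_token_id)
print("tokenizer bos/eos:", tokenizer.bos_token_id, tokenizer.eos_token_id)

# After this commit the two sources should agree, and the IDs should map to
# real entries in the vocabulary rather than falling outside it.
assert config.bos_token_id == tokenizer.bos_token_id
assert config.eos_token_id == tokenizer.eos_token_id
assert config.eos_token_id < len(tokenizer)
print("eos token:", tokenizer.convert_ids_to_tokens(config.eos_token_id))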

Files changed (1)
  1. config.json +2 -2
config.json CHANGED
@@ -9,9 +9,9 @@
     "AutoConfig": "configuration_gpt2_mq.GPT2CustomConfig",
     "AutoModelForCausalLM": "modeling_gpt2_mq.GPT2LMHeadCustomModel"
   },
-  "bos_token_id": 50256,
+  "bos_token_id": 49152,
   "embd_pdrop": 0.1,
-  "eos_token_id": 50256,
+  "eos_token_id": 49152,
   "initializer_range": 0.02,
   "layer_norm_epsilon": 1e-05,
   "model_type": "gpt2",