training code availability
We want to train a model using a different prompt. Wondering if the training code is available?
We are still discussing whether we want to share the training data and logic more publicly. There is no 'prompt' per se for the model we trained; prompts only come into play when we call LLMs like GPT-4, and the prompt we used for that is shared in our GitHub repo (see https://github.com/vectara/hallucination-leaderboard/tree/main for details). I will try to add some snippets soon that explain how to call the model. You basically just train using Sentence Transformers but modify the output to be a single label.
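In the meantime, calling the model looks roughly like this; treat this as a minimal sketch, and treat the hub model id and the CrossEncoder-based loading as assumptions until the official snippets land:

from sentence_transformers import CrossEncoder

# Assumed model id on the Hugging Face hub; check the model card for the
# exact loading call.
model = CrossEncoder('vectara/hallucination_evaluation_model')

# Each pair is [grounding/source text, generated text]; predict() returns a
# factual-consistency score per pair.
scores = model.predict([
    ['A man walks into a bar and buys a drink.', 'A bloke swigs alcohol at a pub.'],
])
print(scores)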
So if you just want to load our model you don't need this, but if you want to use a different cross encoder you need to load it as follows:
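Something along these lines, as a sketch assuming the standard Sentence Transformers cross encoder API (the base model name below is just an example):

import math
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# Any base cross encoder can go here; num_labels=1 gives the single-label
# output mentioned above.
model = CrossEncoder('cross-encoder/nli-deberta-v3-base', num_labels=1)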
# Then load some training examples from a DataFrame with 'grounding',
# 'generated_text' and 'label' columns:
train_examples = []
for i, row in df_td_train.iterrows():
    train_examples.append(InputExample(
        texts=[row['grounding'], row['generated_text']],
        label=int(row['label'])))

# Then train the model via the Cross Encoder API:
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=train_batch_size)
warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1)  # 10% of train data for warm-up
model.fit(train_dataloader=train_dataloader,
          evaluator=test_evaluator,
          epochs=num_epochs,
          evaluation_steps=14_000,
          warmup_steps=warmup_steps,
          output_path=model_save_path,
          show_progress_bar=True)
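The test_evaluator passed to fit() above can be any of the cross encoder evaluators; as a sketch, assuming you build a held-out set of InputExamples the same way as the training ones:

from sentence_transformers.cross_encoder.evaluation import CEBinaryClassificationEvaluator

# test_examples: held-out InputExamples built exactly like train_examples.
test_evaluator = CEBinaryClassificationEvaluator.from_input_examples(
    test_examples, name='heldout')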
Added to the README.md. We are not looking to share the training data at this time, however.