---
language:
- en
pipeline_tag: text-generation
---
# Model Card for khaimaitien/qa-expert-7B-V1.0
This model handles multi-hop question answering by splitting a multi-hop question into a sequence of single-hop questions, answering each of these single questions, and then summarizing the gathered information to produce the final answer.
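For example, a question like "Where was the author of The Old Man and the Sea born?" (an illustrative example, not taken from the dataset) would be split into "Who wrote The Old Man and the Sea?" and "Where was Ernest Hemingway born?", and the answers to these single-hop questions are then combined into the final answer.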
## Model Details
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the dataset khaimaitien/qa-expert-multi-hop-qa-V1.0.

You can find more information about how to use and train the model in this repo: https://github.com/khaimt/qa_expert
## Model Sources
- **Repository:** https://github.com/khaimt/qa_expert
## How to Get Started with the Model
First, you need to clone the repo: https://github.com/khaimt/qa_expert

Then install the requirements:
```shell
pip install -r requirements.txt
```
Here is the example code:
```python
from qa_expert import get_inference_model, InferenceType

def retrieve(query: str) -> str:
    # You need to implement this retrieval function: it takes a query and
    # returns a relevant context string. It plays the same role as the
    # function being called in OpenAI function calling.
    context = ...  # look up passages relevant to `query` here
    return context

model_inference = get_inference_model(InferenceType.hf, "khaimaitien/qa-expert-7B-V1.0")
question = "Where was the author of The Old Man and the Sea born?"  # example multi-hop question
answer, messages = model_inference.generate_answer(question, retrieve)
```
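The `retrieve` function is the part you have to supply yourself. As a minimal sketch (assuming a plain in-memory list of passages; the `PASSAGES` list here is purely hypothetical), a keyword-overlap retriever could look like this:

```python
# Minimal illustrative retriever: ranks an in-memory list of passages by
# keyword overlap with the query. This is only a sketch, not the repo's
# actual retrieval code; in practice you would query a search index or
# vector store.
PASSAGES = [
    "Ernest Hemingway wrote the novel The Old Man and the Sea.",
    "Ernest Hemingway was born in Oak Park, Illinois, in 1899.",
]

def retrieve(query: str) -> str:
    query_words = set(query.lower().split())
    # Return the passage sharing the most words with the query.
    return max(PASSAGES, key=lambda p: len(query_words & set(p.lower().split())))
```

With a retriever like this in place, `generate_answer` can resolve each single-hop sub-question against the returned context before summarizing the final answer.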