{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load SQuAD data"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"def display_text_df(df):\n",
"    \"\"\"Display a DataFrame with wrapped, left-aligned text and the index hidden.\"\"\"\n",
"    display(df.style.set_properties(**{'white-space': 'pre-wrap'}).set_table_styles(\n",
"        [{'selector': 'th', 'props': [('text-align', 'left')]},\n",
"         {'selector': 'td', 'props': [('text-align', 'left')]}\n",
"        ]\n",
"    ).hide())\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from data import get_data\n",
"data = get_data(download=False)\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',\n",
" 'Saint Bernadette Soubirous')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.question_answer_pairs[0]"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr>\n",
"      <th>Question</th>\n",
"      <th>Answer</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <td>What year was the Banská Akadémia founded?</td>\n",
"      <td>1735</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What is another speed that can also be reported by the camera?</td>\n",
"      <td>SOS-based speed</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>Where were the use of advanced materials and techniques on display in Sumer?</td>\n",
"      <td>Sumerian temples and palaces</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>Who is elected every even numbered year?</td>\n",
"      <td>mayor</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What was the purpose of top secret ICBM committee?</td>\n",
"      <td>decide on the feasibility of building an ICBM large enough to carry a thermonuclear weapon</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What conferences became a requirement after Vatican II?</td>\n",
"      <td>National Bishop Conferences</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>Who does M fight with?</td>\n",
"      <td>C</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>How many species of fungi have been found on Antarctica?</td>\n",
"      <td>1150</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>After losing the battle of Guilford Courthouse, Cornawallis moved his troops where?</td>\n",
"      <td>Virginia coastline</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What is the Olympic Torch made from?</td>\n",
"      <td>aluminum.</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"np.random.seed(42)\n",
"arr = np.array(data.question_answer_pairs)\n",
"n_samples = 10\n",
"indices = np.random.choice(len(arr), n_samples, replace=False)\n",
"random_sample = arr[indices]\n",
"# Display the questions and answers in the random sample as a dataframe\n",
"dfSample = pd.DataFrame(random_sample, columns=[\"Question\", \"Answer\"])\n",
"display_text_df(dfSample)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the agent to be evaluated"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from agent import get_agent\n",
"agent = get_agent()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the agent on the random sample of questions"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "4bce5a5c2449435dbd058ed938db2a91",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"  0%|          | 0/10 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from tqdm.notebook import tqdm\n",
"import logging\n",
"\n",
"answers_ref, answers_pred = [], []\n",
"\n",
"# Suppress logging from the agent, which can be quite verbose\n",
"agent.logger.setLevel(logging.CRITICAL)\n",
"\n",
"for question, answer in tqdm(random_sample):\n",
"    answers_ref.append(answer)\n",
"    final_answer = agent.run(question, stream=False, reset=True)\n",
"    answers_pred.append(final_answer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use semantic similarity to evaluate the agent's answers against the reference answers\n",
"\n",
"* One flaw of this approach is that it does not account for questions with multiple acceptable answers; a sketch of max-over-references scoring follows the similarity computation below.\n",
"* It also does not benefit from the context passage that accompanies each question."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"from semscore import EmbeddingModelWrapper\n",
"from statistics import mean\n",
"\n",
"answers_ref = [str(answer) for answer in answers_ref]\n",
"answers_pred = [str(answer) for answer in answers_pred]\n",
"\n",
"em = EmbeddingModelWrapper()\n",
"similarities = em.get_similarities(\n",
" em.get_embeddings( answers_pred ),\n",
" em.get_embeddings( answers_ref ),\n",
")"
]
},
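{
"cell_type": "markdown",
"metadata": {},
"source": [
"If each question came with several acceptable reference answers, a common fix for the first flaw above is to score a prediction against every reference and keep the best match. The sketch below assumes a hypothetical `reference_lists` shape (one list of acceptable answers per question) and reuses the `EmbeddingModelWrapper` from above; it is an illustration, not part of this evaluation pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: max-over-references scoring. A `refs` list of acceptable answers is\n",
"# hypothetical here; the SQuAD sample above has one reference per question.\n",
"def best_similarity(pred, refs, em):\n",
"    \"\"\"Score `pred` against every acceptable reference and keep the best match.\"\"\"\n",
"    sims = em.get_similarities(\n",
"        em.get_embeddings([str(pred)] * len(refs)),\n",
"        em.get_embeddings([str(r) for r in refs]),\n",
"    )\n",
"    return max(sims)\n",
"\n",
"# Example with a made-up reference list:\n",
"# best_similarity(\"The Virginia coast\", [\"Virginia coastline\", \"the coast of Virginia\"], em)"
]
},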
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table>\n",
"  <thead>\n",
"    <tr>\n",
"      <th>Question</th>\n",
"      <th>Reference Answer</th>\n",
"      <th>Predicted Answer</th>\n",
"      <th>Similarity</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <td>What year was the Banská Akadémia founded?</td>\n",
"      <td>1735</td>\n",
"      <td>1735</td>\n",
"      <td>1.000000</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What is another speed that can also be reported by the camera?</td>\n",
"      <td>SOS-based speed</td>\n",
"      <td>Average speed</td>\n",
"      <td>0.433297</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>Where were the use of advanced materials and techniques on display in Sumer?</td>\n",
"      <td>Sumerian temples and palaces</td>\n",
"      <td>Based on the information provided, it appears that the Sumerians developed and displayed advanced materials and techniques such as metrology, writing, and astronomy throughout their city-states. The specific locations where these advanced materials and techniques were on display are not explicitly mentioned.\n",
"\n",
"However, considering the context of the question, I would argue that the city-states of Sumer itself is the most relevant answer. The city-states of Sumer were the hub of Sumerian civilization, culture, and innovation, and it was likely there that these advanced materials and techniques were developed, displayed, and showcased.\n",
"\n",
"Therefore, my final answer to the user request is:\n",
"\n",
"The city-states of Sumer</td>\n",
"      <td>0.545807</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>Who is elected every even numbered year?</td>\n",
"      <td>mayor</td>\n",
"      <td>mayor</td>\n",
"      <td>1.000000</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What was the purpose of top secret ICBM committee?</td>\n",
"      <td>decide on the feasibility of building an ICBM large enough to carry a thermonuclear weapon</td>\n",
"      <td>decide on the feasibility of building an ICBM large enough to carry a thermonuclear weapon</td>\n",
"      <td>1.000000</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What conferences became a requirement after Vatican II?</td>\n",
"      <td>National Bishop Conferences</td>\n",
"      <td>['National Bishop Conferences']</td>\n",
"      <td>0.937632</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>Who does M fight with?</td>\n",
"      <td>C</td>\n",
"      <td>C</td>\n",
"      <td>1.000000</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>How many species of fungi have been found on Antarctica?</td>\n",
"      <td>1150</td>\n",
"      <td>Based on the output from the `squad_retriever` tool, I can see that there are two documents in the SQuAD dataset that answer the question \"How many species of fungi have been found on Antarctica?\".\n",
"\n",
"The first document states that about 1150 species of fungi have been recorded from Antarctica. The second document does not provide a different answer to this question.\n",
"\n",
"Therefore, my final answer is:\n",
"\n",
"There are approximately 1150 species of fungi that have been found on Antarctica.</td>\n",
"      <td>-0.020657</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>After losing the battle of Guilford Courthouse, Cornawallis moved his troops where?</td>\n",
"      <td>Virginia coastline</td>\n",
"      <td>The Virginia coastline</td>\n",
"      <td>0.948570</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>What is the Olympic Torch made from?</td>\n",
"      <td>aluminum.</td>\n",
"      <td>aluminum</td>\n",
"      <td>0.973508</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Mean similarity: 0.78\n"
]
}
],
"source": [
"questions = [question for question, _ in random_sample]\n",
"dfAnswers = pd.DataFrame(list(zip(questions, answers_ref, answers_pred)), columns=[\"Question\", \"Reference Answer\", \"Predicted Answer\"])\n",
"dfAnswers[\"Similarity\"] = similarities\n",
"display_text_df(dfAnswers)\n",
"print(f\"Mean similarity: {round(mean(similarities), 2)}\")"
]
},
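{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a complementary check, we can compute the token-level F1 used by the official SQuAD evaluation over the same answer pairs. The cell below is a minimal re-implementation of that metric (lowercasing, punctuation and article stripping, bag-of-tokens overlap), not the official evaluation script, so scores may differ slightly at the margins."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"import string\n",
"from collections import Counter\n",
"\n",
"def normalize(text):\n",
"    \"\"\"Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style).\"\"\"\n",
"    text = \"\".join(ch for ch in text.lower() if ch not in string.punctuation)\n",
"    text = re.sub(r\"\\b(a|an|the)\\b\", \" \", text)\n",
"    return \" \".join(text.split())\n",
"\n",
"def token_f1(pred, ref):\n",
"    \"\"\"Bag-of-tokens F1 between a predicted and a reference answer.\"\"\"\n",
"    pred_tokens, ref_tokens = normalize(pred).split(), normalize(ref).split()\n",
"    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())\n",
"    if overlap == 0:\n",
"        return 0.0\n",
"    precision, recall = overlap / len(pred_tokens), overlap / len(ref_tokens)\n",
"    return 2 * precision * recall / (precision + recall)\n",
"\n",
"f1_scores = [token_f1(p, r) for p, r in zip(answers_pred, answers_ref)]\n",
"print(f\"Mean token-level F1: {round(mean(f1_scores), 2)}\")"
]
},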
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "aai520",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}