labels missing
Hello,
from the original FEVER task page (https://fever.ai/2018/task.html), we can see that
The training and development data will contain 4 fields:
__
id: The ID of the claim
label: The annotated label for the claim. Can be one of SUPPORTS|REFUTES|NOT ENOUGH INFO.
claim: The text of the claim.
evidence: A list of evidence sets (lists of [Annotation ID, Evidence ID, Wikipedia URL, sentence ID] tuples) or a [Annotation ID, Evidence ID, null, null] tuple if the label is NOT ENOUGH INFO.
(the Annotation ID and Evidence ID fields are for internal use only and are not used for scoring. They may help debug or correct annotation issues at a later point in time.)
__
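For reference, a single labelled record following this schema would look roughly like the sketch below (the values are invented, only the structure matters):

```python
# Illustrative FEVER training record; the structure follows the field
# description above, the values themselves are made up.
example_record = {
    "id": 123456,
    "label": "SUPPORTS",  # one of SUPPORTS | REFUTES | NOT ENOUGH INFO
    "claim": "Some claim about a Wikipedia entity.",
    "evidence": [
        # one evidence set: a list of [Annotation ID, Evidence ID, Wikipedia URL, sentence ID]
        [[111111, 222222, "Some_Wikipedia_page", 0]],
    ],
}
```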
Could you provide the label column/mapping as well? It is essential for the task.
Thank you
DK
Hi @koleckar, this dataset contains all the relevant information for running the task through mteb. This is not a 1-to-1 reupload of the FEVER dataset (e.g. it only includes the label insofar as it is included in the query relevance annotations).
You can see in the MTEB paper that this dataset is derived from BEIR. Feel free to give it a closer look.
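A minimal sketch of what running it looks like (the model name is just an example, and the exact API may differ slightly depending on your mteb version):

```python
# Run the FEVER retrieval task through mteb with a SentenceTransformers model.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any compatible embedding model
evaluation = MTEB(tasks=["FEVER"])
evaluation.run(model, output_folder="results/fever")
```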
Hi Kenneth, thank you for the swift answer. We are aware of this dataset's origin and that mteb ingests a large portion of the BEIR datasets and eval scripts. We wondered what the strategy is, then, for IR evaluation: if the label is missing, what do you do when, for a query, the linked evidence in reality 'refutes' the claim or is 'not enough info'? We think such a document should have low similarity to the query embedding. Thank you
I think this is what you are looking for (from the BEIR paper):
D.9 Fact Checking
Fact checking verifies a claim against a big collection of evidence [60]. The task requires knowledge
about the claim and reasoning over multiple documents. We consider a sentence-level claim as input
and the relevant document passage verifying the claim as output.FEVER [60] The Fact Extraction and VERification dataset is collected to facilitate the automatic
fact checking. We utilize the original paper splits as queries Q and retrieve evidences from the
pre-processed Wikipedia Abstracts (June 2017 dump) as our corpus T.
From my reading, the data does not consider the 'refutes' or 'not enough info' claims (they are simply not used as queries).
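If you want to verify this yourself, a rough sketch (assuming the mteb/fever query IDs match the original FEVER claim IDs, that the dataset follows the usual mteb/BEIR layout on the Hub, and that you have train.jsonl from fever.ai locally) would be to cross-reference the query IDs with the original labels:

```python
# Rough sketch: count original FEVER labels among the queries used in mteb/fever.
# Assumes mteb/fever query IDs correspond to original FEVER claim IDs and that
# train.jsonl from https://fever.ai/2018/task.html is in the working directory
# (claims from other splits will simply show up as "not in train.jsonl").
import json
from collections import Counter

from datasets import load_dataset

# Original FEVER labels, keyed by claim ID.
labels = {}
with open("train.jsonl") as f:
    for line in f:
        record = json.loads(line)
        labels[str(record["id"])] = record["label"]

# Query IDs actually used by the retrieval task (config/split names assumed
# from the usual mteb/BEIR format: configs "corpus", "queries", "default").
queries = load_dataset("mteb/fever", "queries", split="queries")
counts = Counter(labels.get(str(qid), "not in train.jsonl") for qid in queries["_id"])
print(counts)
```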