kgreenewald committed on
Commit 8af1fc1 · verified · 1 Parent(s): 2675578

Upload 3 files
Files changed (3)
  1. README.md +155 -3
  2. adapter_config.json +35 -0
  3. adapter_model.safetensors +3 -0
README.md CHANGED
---
base_model: ibm-granite/granite-3.2-8b-instruct
library_name: peft
---

# LoRA Adapter for Answerability Classification

Welcome to Granite Experiments!

Think of Experiments as a preview of what's to come. These projects are still under development, but we wanted to let the open-source community take them for a spin! Use them, break them, and help us build what's next for Granite – we'll keep an eye out for feedback and questions. Happy exploring!

Just a heads-up: Experiments are forever evolving, so we can't commit to ongoing support or guarantee performance.

# Model Summary
This is a LoRA adapter for [ibm-granite/granite-3.2-8b-instruct](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) that is fine-tuned for the binary answerability classification task. The model takes as input a multi-turn conversation and a set of documents, and classifies whether the user's final query is answerable or unanswerable based on the available information in the documents.

- **Developer:** IBM Research
- **Model type:** LoRA adapter for [ibm-granite/granite-3.2-8b-instruct](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct)
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Intended use
This is a LoRA adapter that enables answerability classification for the final user query in a multi-turn conversation, with respect to a set of provided documents. The model is trained to determine whether the last user query is answerable or unanswerable, based solely on the information present in the documents. This makes it suitable for applications involving RAG and document-grounded chatbots, where knowing whether sufficient information exists to answer a query is crucial. The classification output from the answerability model can be used in several downstream applications, including but not limited to the following (see the sketch after this list):
- Filtering out unanswerable questions before sending them to generation in a RAG setting. By classifying a query as unanswerable upfront, the system can prevent hallucinated or misleading responses.
- Re-querying the retriever to get more relevant documents. If a query is initially deemed unanswerable, the retriever can be re-invoked with alternate formulations to fetch more relevant documents.
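
A minimal sketch of these two uses, assuming a `classify_answerability` helper that wraps the adapter call from the Quickstart below, plus hypothetical `retrieve`, `rewrite_query`, and `generate` functions standing in for the rest of the RAG stack:

```python
# Sketch of gating a RAG pipeline on the adapter's verdict. `retrieve`,
# `rewrite_query`, and `generate` are hypothetical stand-ins for your own
# retriever, query rewriter, and generator; `classify_answerability` wraps
# the adapter call shown in the Quickstart below.

def answer_with_gating(convo, retrieve, rewrite_query, generate,
                       classify_answerability, max_retries=1):
    """Answer the last user turn only if the adapter deems it answerable."""
    query = convo[-1]["content"]
    documents = retrieve(query)
    for _ in range(max_retries + 1):
        if classify_answerability(convo, documents) == "answerable":
            # Sufficient grounding found: proceed to generation.
            return generate(convo, documents)
        # Deemed unanswerable: re-invoke the retriever with a reformulation.
        documents = retrieve(rewrite_query(query))
    # Still unanswerable after retries: abstain instead of hallucinating.
    return "I don't have enough information in the provided documents to answer this."
```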

**Model input**: The input to the model is a list of conversational turns and a list of documents, converted to a string using the `apply_chat_template` function. The turns can alternate between the `user` and `assistant` roles, and the last turn must come from the `user`. Each document in the list is a dictionary with a `text` field, which contains the text of the corresponding document.

To prompt the LoRA adapter to determine answerability, a special answerability role is used to trigger this capability of the model. The role includes the keyword "answerability": `<|start_of_role|>answerability<|end_of_role|>`

**Model output**: When prompted with the above input, the model generates `answerable` or `unanswerable`.

## Quickstart Example

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

ANSWERABILITY_PROMPT = "<|start_of_role|>answerability<|end_of_role|>"
BASE_NAME = "ibm-granite/granite-3.2-8b-instruct"
LORA_NAME = "ibm-granite/granite-rag-answerability-prediction-3.2-8b-lora"

# Load the base model and attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(BASE_NAME, padding_side='left', trust_remote_code=True)
model_base = AutoModelForCausalLM.from_pretrained(BASE_NAME, device_map="auto")
model_answerability = PeftModel.from_pretrained(model_base, LORA_NAME)

# A multi-turn conversation whose final user query we want to classify.
convo = [
    {"role": "user", "content": "name the viceroy at the time of quit india movement"},
    {"role": "assistant", "content": "The Viceroy at the time of the Quit India Movement was Lord Linlithgow."},
    {"role": "user", "content": "opposition leader for the movement"},
    {"role": "assistant", "content": "A movement was launched at the Bombay session of the All - India Congress Committee by Mahatma Gandhi on 8 August 1942. Many students paid attention to Subhas Chandra Bose as a leader, who was in exile. He organized the Indian Legion in Germany , reorganized the Indian National Army with Japanese assistance , and soliciting help from the Axis Powers , conducted a guerrilla war against the British authorities."},
    {"role": "user", "content": "who is Patel?"},
    {"role": "assistant", "content": "Vallabhbhai Patel, popularly known as Sardar Patel , was the first Deputy Prime Minister of India . He was an Indian barrister and statesman , a senior leader of the Indian National Congress and a founding father of the Republic of India who played a leading role in the country 's struggle for independence and guided its integration into a united , independent nation . He acted as de facto Supreme Commander - in - chief of Indian army during the political integration of India and the Indo - Pakistani War of 1947 . He is credited by historians with ensuring the success of the rebellion across India. He was arrested and was imprisoned with the entire Congress Working Committee from 1942 to 1945"},
    {"role": "user", "content": "how do you pronounce Vallabhbhai?"},
]

# The grounding documents; each is a dictionary with a `text` field.
documents = [
    {'text': "Vallabhbhai Patel\nAmong Patel 's surviving family , Maniben Patel lived in a flat in Mumbai for the rest of her life following her father 's death ; she often led the work of the Sardar Patel Memorial Trust , which organises the prestigious annual Sardar Patel Memorial Lectures , and other charitable organisations . Dahyabhai Patel was a businessman who was elected to serve in the Lok Sabha ( the lower house of the Indian Parliament ) as an MP in the 1960s ."},
    {'text': "Vallabhbhai Patel\nPatel 's date of birth was never officially recorded ; Patel entered it as 31 October on his matriculation examination papers . He belonged to the Leuva Patel Patidar community of Central Gujarat , although the Leuva Patels and Kadava Patels have also claimed him as one of their own ."},
    {'text': "Vallabhbhai Patel\nIn April 2015 the Government of India declassified surveillance reports suggesting that Patel , while Home Minister , and Nehru were among officials involved in alleged government - authorised spying on the family of Subhas Chandra Bose ."}
]

# Prepend an empty system turn, then strip its rendered prefix below, so the
# default system prompt is removed from the templated string.
convo = [{"role": "system", "content": ""}] + convo

string = tokenizer.apply_chat_template(convo, documents=documents, tokenize=False, add_generation_prompt=False)
string_to_remove = tokenizer.apply_chat_template(convo[0:1], tokenize=False, add_generation_prompt=False)
string = string[len(string_to_remove):]

# Append the special answerability role to trigger classification.
inputs = string + ANSWERABILITY_PROMPT

inputT = tokenizer(inputs, return_tensors="pt")

output = model_answerability.generate(inputT["input_ids"].to(device), attention_mask=inputT["attention_mask"].to(device), max_new_tokens=3)
output_text = tokenizer.decode(output[0])

# Everything after the answerability role is the model's verdict.
answer = output_text.split(ANSWERABILITY_PROMPT)[1]
print(answer)
```
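
For downstream use (for example, the gating sketch in the Intended use section above), the same logic can be wrapped in a small helper. This is an illustrative convenience function, not part of the model's API; it assumes `tokenizer`, `model_answerability`, `device`, and `ANSWERABILITY_PROMPT` are defined as in the Quickstart:

```python
# Illustrative helper wrapping the Quickstart logic; assumes `tokenizer`,
# `model_answerability`, `device`, and ANSWERABILITY_PROMPT from above.

def classify_answerability(convo, documents):
    """Return 'answerable' or 'unanswerable' for the conversation's last user turn."""
    turns = [{"role": "system", "content": ""}] + convo
    rendered = tokenizer.apply_chat_template(
        turns, documents=documents, tokenize=False, add_generation_prompt=False)
    # Strip the rendered default system prompt, as in the Quickstart.
    prefix = tokenizer.apply_chat_template(
        turns[0:1], tokenize=False, add_generation_prompt=False)
    prompt = rendered[len(prefix):] + ANSWERABILITY_PROMPT
    enc = tokenizer(prompt, return_tensors="pt")
    out = model_answerability.generate(
        enc["input_ids"].to(device),
        attention_mask=enc["attention_mask"].to(device),
        max_new_tokens=3)
    verdict = tokenizer.decode(out[0]).split(ANSWERABILITY_PROMPT)[1]
    return "unanswerable" if "unanswerable" in verdict else "answerable"
```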

## Training Details

### Training Data

The training data uses the publicly available Government corpus from [MT-RAG](https://arxiv.org/pdf/2501.03468) as the source of documents. Based on this corpus, we constructed a dataset consisting of a mix of human-created and synthetically generated multi-turn conversations. It includes two types of examples:
1. Answerable queries, where the final user question can be answered based on the provided documents. These examples teach the adapter to recognize when sufficient information is present to support an answer.
2. Unanswerable queries, where the documents lack the necessary information to answer the final user query.

We used Mixtral as an automatic judge to validate the answerability labels and filter out noisy samples.

#### Training Hyperparameters
The LoRA adapter was fine-tuned using PEFT with the following settings: rank = 32, learning rate = 5e-6, and 25 epochs, with early stopping based on a validation set and a 90/10 train/validation split.
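
For reference, a PEFT configuration consistent with these hyperparameters and with the shipped `adapter_config.json` might look like the sketch below; the actual training script is not released, so treat this as an assumption:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical reconstruction: rank, alpha, dropout, and target modules mirror
# the shipped adapter_config.json; the real training script is not released.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("ibm-granite/granite-3.2-8b-instruct")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # sanity check: only the LoRA weights train
```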

## Evaluation

### Answerability Classification
We evaluated the model against baselines on binary answerability classification using two separate benchmarks:

- Single-turn Setting ([SQUADRun Benchmark](https://aclanthology.org/P18-2124.pdf)): In this setting, the user query and the supporting documents are provided. Our model was evaluated against standard baselines to measure its ability to determine whether a standalone question is answerable based on the document set.

| | unanswerable | | | answerable | | | Classification Accuracy | Weighted F1 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | Precision | Recall | F1 | Precision | Recall | F1 | | |
| BigBird (pre-trained embeddings) w/ MLP | 49.2 | 68.5 | 57.3 | 48.0 | 29.2 | 36.3 | 48.9 | 46.8 |
| llama2-7b as classifier (Full SFT) | 72.2 | 71.0 | 71.6 | 71.4 | 72.6 | 72.0 | 71.8 | 71.8 |
| Granite 3.2-8b LoRA | 84.2 | 68.0 | 75.2 | 73.1 | 87.2 | 79.5 | 77.6 | 77.4 |
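
As a quick sanity check on these tables, each per-class F1 is the harmonic mean of the corresponding precision and recall:

```python
# Per-class F1 is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Granite 3.2-8b LoRA, unanswerable class, single-turn table above:
print(round(f1(84.2, 68.0), 1))  # 75.2
```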

- Multi-turn Setting (MT-RAG Benchmark): In this setting, the model is given the full multi-turn conversation history along with the supporting documents. This benchmark evaluates the model's ability to assess answerability when the final user query can also depend on prior turns for context.

| | unanswerable | | | answerable | | | Classification Accuracy | Weighted F1 Score |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | Precision | Recall | F1 | Precision | Recall | F1 | | |
| BigBird (pre-trained embeddings) w/ MLP | 69.6 | 77.6 | 73.4 | 70.1 | 60.8 | 65.2 | 69.8 | 69.6 |
| llama2-7b as classifier (Full SFT) | 86.9 | 89.4 | 88.2 | 87.3 | 84.5 | 85.9 | 87.1 | 87.1 |
| Granite 3.2-8b LoRA | 85.4 | 89.3 | 87.3 | 87.0 | 82.4 | 84.6 | 86.1 | 86.1 |

### Comparing LoRA Adapter vs. Vanilla Granite for Answer Quality
We compare the performance of Granite 3.2-8b Instruct vs. the Granite 3.2-8b LoRA adapter on a subset of the MT-RAG Benchmark. In this setup, each query is paired with only 5 retrieved passages as context.

- Answerability Classification Performance: The LoRA adapter outperforms the vanilla model in overall F1 on both answerables and unanswerables. The LoRA adapter achieves higher recall on unanswerable queries, making it better at identifying questions that should not be answered. However, this comes at the cost of lower recall on answerable queries.

- RAGAS Faithfulness: The RAGAS Faithfulness score (on truly answerable queries) drops slightly with the LoRA adapter. This is not due to degraded generation quality, but rather because the model labels more truly answerable queries as unanswerable and abstains from answering.

- Joint Answerability-Faithfulness Score, computed as:
> = 1 (if model prediction = IDK/unanswerable ∩ ground truth = unanswerable)

> = RAGAS Faithfulness (if model prediction = non-IDK/answerable ∩ ground truth = answerable)

> = 0 (otherwise)

This score rewards the model for correctly abstaining on unanswerable queries (full credit) and for providing faithful answers on answerable queries (partial credit based on RAGAS Faithfulness). No credit is given for incorrect or unfaithful predictions.
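
In code, the per-example scoring rule reads as follows (a minimal sketch; the `faithfulness` value would come from a RAGAS evaluation of the generated answer):

```python
# Per-example joint answerability-faithfulness score, as defined above.
def joint_score(prediction: str, ground_truth: str, faithfulness: float) -> float:
    if prediction == "unanswerable" and ground_truth == "unanswerable":
        return 1.0  # full credit for correctly abstaining
    if prediction == "answerable" and ground_truth == "answerable":
        return faithfulness  # partial credit, based on RAGAS Faithfulness
    return 0.0  # no credit for incorrect or unfaithful predictions

# Example: a faithful answer (0.7) to a truly answerable query scores 0.7.
print(joint_score("answerable", "answerable", 0.7))
```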

The LoRA adapter achieves a 7% lift on this metric, rewarding the model for correctly abstaining on unanswerable queries and for being faithful when it chooses to answer.

| | F1 Score Unanswerable | F1 Score Answerable | Recall Unanswerable | Recall Answerable | RAGAS Faithfulness (on Truly Answerable) | Joint Answerability-Faithfulness Score |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Granite 3.2-8b Instruct | 14 | 76 | 8 | 97 | 75 | 50 |
| Granite 3.2-8b LoRA | 47 | 77 | 37 | 88 | 70 | 57 |

## Model Card Authors

[Vraj Shah](mailto:[email protected])

### Framework versions

- PEFT 0.14.0
adapter_config.json ADDED
```json
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "ibm-granite/granite-3.2-8b-instruct",
  "bias": "none",
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 32,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "q_proj",
    "k_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": false,
  "use_rslora": false,
  "model_type": "granite"
}
```
adapter_model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:dddf86fe562228b248556ee9807b84656da6f66e1a13bb50b418d0e616fc3a23
size 94404160
```