Revisiting In-Context Learning with Long Context Language Models
Abstract
In-Context Learning (ICL) is a technique by which language models make predictions based on examples provided in their input context. Previously, the context window size imposed a limit on the number of examples that could be shown, making example selection techniques crucial for identifying a maximally effective set of examples. However, the recent advent of Long Context Language Models (LCLMs) has significantly increased the number of examples that can be included in context, raising the question of whether ICL performance in the many-shot regime is still sensitive to the method of example selection. To answer this, we revisit these approaches in the context of LCLMs through extensive experiments on 18 datasets spanning 4 tasks. Surprisingly, we observe that sophisticated example selection techniques do not yield significant improvements over simple random selection. Instead, we find that the advent of LCLMs has fundamentally shifted the challenge of ICL from selecting the most effective examples to collecting sufficient examples to fill the context window. Specifically, on certain datasets, including all available examples does not fully utilize the context window; by augmenting the in-context examples with a simple data augmentation approach, we improve ICL performance by 5%.
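To make the random-selection baseline concrete, below is a minimal sketch of how a many-shot ICL prompt can be assembled by uniform random sampling from an example pool. The function name, prompt formatting, and example data are illustrative assumptions, not the paper's actual implementation.

```python
import random

def build_many_shot_prompt(pool, query, max_examples=None, seed=0):
    """Assemble a many-shot ICL prompt via uniform random example selection.

    `pool` is a list of (input_text, label) pairs; `query` is the test input.
    With an LCLM, `max_examples` can simply be len(pool): the whole pool
    fits in context, so no retrieval or ranking step is needed.
    """
    rng = random.Random(seed)
    k = len(pool) if max_examples is None else min(max_examples, len(pool))
    shots = rng.sample(pool, k)  # uniform random selection, no scoring
    lines = [f"Input: {x}\nLabel: {y}" for x, y in shots]
    lines.append(f"Input: {query}\nLabel:")  # leave the query label blank
    return "\n\n".join(lines)

# Hypothetical sentiment-classification pool for illustration only.
pool = [("great movie", "positive"), ("terrible plot", "negative"),
        ("loved it", "positive"), ("boring", "negative")]
prompt = build_many_shot_prompt(pool, "what a waste of time")
print(prompt)
```

In the many-shot regime the paper studies, the interesting case is when `pool` is large enough to approach the context limit; the finding is that this simple sampler performs on par with sophisticated selection methods.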
Community
We observe a paradigm shift for in-context learning with long context language models: from example selection to context utilization. We propose a simple yet effective data augmentation approach to boost performance, along with a comprehensive analysis of long context-related factors.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Retrieval or Global Context Understanding? On Many-Shot In-Context Learning for Long-Context Evaluation (2024)
- Reducing Distraction in Long-Context Language Models by Focused Learning (2024)
- RARe: Retrieval Augmented Retrieval with In-Context Examples (2024)
- LIFT: Improving Long Context Understanding Through Long Input Fine-Tuning (2024)
- PICLe: Pseudo-Annotations for In-Context Learning in Low-Resource Named Entity Detection (2024)
- Improving In-Context Learning with Small Language Model Ensembles (2024)
- PromptRefine: Enhancing Few-Shot Performance on Low-Resource Indic Languages with Example Selection from Related Example Banks (2024)