Eliciting Fine-Tuned Transformer Capabilities via Inference-Time Techniques
Abstract
Transformers can approximate supervised fine-tuning capabilities through in-context learning without altering model parameters, supported by theoretical bounds and practical techniques.
Large language models have transformed natural language processing, yet supervised fine-tuning (SFT) remains computationally intensive. This paper formally proves that capabilities acquired through SFT can be approximated by a base transformer model using inference-time techniques, specifically in-context learning (ICL), without altering model parameters, under idealized assumptions including unbounded computational resources and access to the fine-tuning dataset. We extend these results to practical scenarios with finite context lengths and partial dataset access. For text generation tasks with fixed output length $l$, datasets of size $O\!\left(\frac{mV}{\varepsilon^2}\log\frac{m}{\delta}\right)$ or, with bounded context, $O\!\left(\frac{l\log V}{\varepsilon^2}\log\frac{1}{\delta}\right)$ suffice to approximate fine-tuned behavior across $m$ contexts within error $\varepsilon$, where $V$ is the vocabulary size and $\delta$ is the failure probability. For linear classification, datasets of size $O\!\left(\frac{d}{\varepsilon}\right)$ or, with fixed context, $O\!\left(\frac{1}{\varepsilon^2}\log\frac{1}{\delta}\right)$ are sufficient, where $d$ is the input dimension. Grounded in the Turing completeness of transformers, these results provide a theoretical foundation for resource-efficient deployment of large language models, with practical techniques like retrieval-augmented generation bridging theory to real-world applications.
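To make the asymptotic bounds above concrete, the sketch below plugs illustrative parameter values into each rate. The unit constant factors, the helper function names, and the example values (vocabulary size 32,000, 100 contexts, output length 128) are assumptions for illustration only; the paper states asymptotic rates, not exact dataset sizes.

```python
import math

def text_gen_bound(m, V, eps, delta):
    """O(mV/eps^2 * log(m/delta)) examples for text generation across m contexts.
    Constant factor of 1 is an assumption; only the asymptotic rate comes from the paper."""
    return math.ceil(m * V / eps**2 * math.log(m / delta))

def text_gen_bound_bounded_ctx(l, V, eps, delta):
    """O(l*log(V)/eps^2 * log(1/delta)) examples with bounded context and fixed output length l."""
    return math.ceil(l * math.log(V) / eps**2 * math.log(1 / delta))

def linear_clf_bound(d, eps):
    """O(d/eps) examples for linear classification in input dimension d."""
    return math.ceil(d / eps)

def linear_clf_bound_fixed_ctx(eps, delta):
    """O(1/eps^2 * log(1/delta)) examples for linear classification with fixed context."""
    return math.ceil(1 / eps**2 * math.log(1 / delta))

if __name__ == "__main__":
    # Illustrative parameter values, not taken from the paper.
    print("text generation:          ", text_gen_bound(m=100, V=32_000, eps=0.1, delta=0.05))
    print("text gen, bounded context:", text_gen_bound_bounded_ctx(l=128, V=32_000, eps=0.1, delta=0.05))
    print("linear classification:    ", linear_clf_bound(d=512, eps=0.1))
    print("linear clf, fixed context:", linear_clf_bound_fixed_ctx(eps=0.1, delta=0.05))
```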
Community
This paper provides the first formal proof that base transformer models can approximate fine-tuned capabilities using only inference-time techniques like in-context learning (ICL) - no parameter updates needed! 🎯
Key theoretical contributions:
- Proves ICL can match SFT performance within quantifiable error bounds
- Derives sufficient dataset sizes: O(mV/ε²) for text generation, O(d/ε) for linear classification
- Grounded in transformer Turing completeness
Real-world bridge: Connects theory to practical techniques like RAG and few-shot prompting that teams already use.
While the assumptions are idealized (unbounded compute, full dataset access), the bounded context results (Theorems 4-5) provide actionable guidance for modern LLMs with finite context windows.
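As a concrete illustration of how these bounded-context results map onto everyday practice, here is a minimal sketch of few-shot / retrieval-style prompting under a fixed token budget. The whitespace token counter, the overlap-based demonstration selection, and all function names are simplifying assumptions chosen for illustration; the paper does not prescribe a particular retriever or prompt format.

```python
# Minimal sketch: pack (input, output) demonstrations from a (possibly partial)
# fine-tuning dataset into a finite context window, then query a frozen base model.

def count_tokens(text: str) -> int:
    # Crude whitespace tokenizer as a stand-in for the model's real tokenizer.
    return len(text.split())

def select_demonstrations(query: str, dataset: list[tuple[str, str]],
                          budget_tokens: int) -> list[tuple[str, str]]:
    """Greedily pack the most query-relevant demonstrations into the context budget."""
    query_words = set(query.lower().split())
    scored = sorted(dataset,
                    key=lambda ex: len(query_words & set(ex[0].lower().split())),
                    reverse=True)
    selected, used = [], 0
    for inp, out in scored:
        cost = count_tokens(inp) + count_tokens(out)
        if used + cost <= budget_tokens:
            selected.append((inp, out))
            used += cost
    return selected

def build_prompt(query: str, demos: list[tuple[str, str]]) -> str:
    """Format the selected demonstrations followed by the new query, ICL-style."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in demos]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# Usage: demonstrations drawn from the fine-tuning dataset, prompt sent to a frozen base model.
dataset = [("translate 'chat' to English", "cat"),
           ("translate 'chien' to English", "dog")]
query = "translate 'cheval' to English"
prompt = build_prompt(query, select_demonstrations(query, dataset, budget_tokens=200))
print(prompt)
```

The design choice here is greedy packing: the most relevant demonstrations are added until the token budget is exhausted, mirroring the bounded-context setting the comment above points to.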
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- From Parameters to Prompts: Understanding and Mitigating the Factuality Gap between Fine-Tuned LLMs (2025)
- Adaptive Task Vectors for Large Language Models (2025)
- Curse of High Dimensionality Issue in Transformer for Long-context Modeling (2025)
- Enhancing Complex Instruction Following for Large Language Models with Mixture-of-Contexts Fine-tuning (2025)
- Text-to-LoRA: Instant Transformer Adaption (2025)
- Beyond In-Context Learning: Aligning Long-form Generation of Large Language Models via Task-Inherent Attribute Guidelines (2025)
- QwenLong-CPRS: Towards $\infty$-LLMs with Dynamic Context Optimization (2025)