filename | text
---|---
2310.15154.pdf | Pre-publication draft
LINEAR REPRESENTATIONS OF SENTIMENT
IN LARGE LANGUAGE MODELS
Curt Tigges*♣, Oskar John Hollinsworth*♡, Atticus Geiger♠⋆, Neel Nanda♢
♣EleutherAI Institute,♡SERI MATS,♠Stanford University,⋆Pr(Ai)2R Group,♢Independent
*Equal primary authors (order random)
ABSTRACT
Sentiment is a pervasive feature in natural language text, yet it is an open question
how sentiment is represented within Large Language Models (LLMs). In this
study, we reveal that across a range of models, sentiment is represented linearly:
a single direction in activation space mostly captures the feature across a range
of tasks with one extreme for positive and the other for negative. Through causal
interventions, we isolate this direction and show it is causally relevant in both toy
tasks and real-world datasets such as the Stanford Sentiment Treebank. Through this
case study we model a thorough investigation of what a single direction means on
a broad data distribution.
We further uncover the mechanisms that involve this direction, highlighting the
roles of a small subset of attention heads and neurons. Finally, we discover a
phenomenon which we term the summarization motif: sentiment is not solely
represented on emotionally charged words, but is additionally summarized at intermediate
positions without inherent sentiment, such as punctuation and names.
We show that in Stanford Sentiment Treebank zero-shot classification, 76% of
above-chance classification accuracy is lost when ablating the sentiment direction,
nearly half of which (36%) is due to ablating the summarized sentiment direction
exclusively at comma positions.
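As a concrete illustration of what "ablating the sentiment direction" means operationally, the sketch below projects a given direction out of residual-stream activations at selected token positions (e.g. commas). It is a minimal sketch, not the authors' code: the direction, activation shapes, and position choices are assumed inputs (in practice the direction would be fitted, e.g. by a difference of means over valenced tokens, and applied through model hooks).

```python
import numpy as np

def ablate_direction(acts: np.ndarray, direction: np.ndarray, positions=None) -> np.ndarray:
    """Remove the component of `acts` along `direction`.

    acts:      (seq_len, d_model) residual-stream activations (assumed shape).
    direction: (d_model,) sentiment direction (assumed given, e.g. fitted by a
               difference-of-means or a probe on valenced tokens).
    positions: optional iterable of token indices (e.g. comma positions);
               if None, ablate at every position.
    """
    d = direction / np.linalg.norm(direction)
    out = acts.copy()
    idx = range(acts.shape[0]) if positions is None else positions
    for t in idx:
        out[t] = out[t] - np.dot(out[t], d) * d   # project out the direction
    return out

# Toy usage: ablate only at (hypothetical) comma positions.
acts = np.random.randn(12, 768)          # fake activations
sent_dir = np.random.randn(768)          # placeholder for a fitted direction
acts_ablated = ablate_direction(acts, sent_dir, positions=[3, 7])
```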
1 INTRODUCTION
Large language models (LLMs) have displayed increasingly impressive capabilities (Brown et al.,
2020; Radford et al., 2019; Bubeck et al., 2023), but their internal workings remain poorly un-
derstood. Nevertheless, recent evidence (Li et al., 2023) has suggested that LLMs are capable of
forming models of the world, i.e., inferring hidden variables of the data generation process rather
than simply modeling surface word co-occurrence statistics. There is significant interest (Christiano
et al., 2021; Burns et al., 2022) in deciphering the latent structure of such representations.
In this work, we investigate how LLMs represent sentiment, a variable in the data generation process
that is relevant and interesting across a wide variety of language tasks (Cui et al., 2023). Approach-
ing our investigations through the frame of causal mediation analysis (Vig et al., 2020; Pearl, 2022;
Geiger et al., 2023a), we show that these sentiment features are represented linearly by the models,
are causally significant, and are utilized by human-interpretable circuits (Olah et al., 2020; Elhage
et al., 2021a).
We find the existence of a single direction scientifically interesting as further evidence for the linear
representation hypothesis (Mikolov et al., 2013; Elhage et al., 2022) – that models tend to extract
properties of the input and internally represent them as directions in activation space. Understanding
the structure of internal representations is crucial to begin to decode them, and linear representations
are particularly amenable to detailed reverse-engineering (Nanda et al., 2023b).
We show evidence of a phenomenon we have labeled the “summarization motif”, where rather
than sentiment being directly moved from valenced tokens to the final token, it is first aggregated on
intermediate summarization tokens without inherent valence such as commas, periods and particular
nouns.¹ This summarization structure for next-token prediction can be seen as a naturally emerging
¹Our use of the term “summarization” is distinct from typical NLP summarization tasks.
arXiv:2310.15154v1 [cs.LG] 23 Oct 2023 |
2212.10559.pdf | Why Can GPT Learn In-Context?
Language Models Secretly Perform Gradient Descent as Meta-Optimizers
Damai Dai†∗, Yutao Sun∥∗, Li Dong‡, Yaru Hao‡, Zhifang Sui†, Furu Wei‡
†Peking University∥Tsinghua University
‡Microsoft Research
https://github.com/microsoft/LMOps
Abstract
Large pretrained language models have shown
surprising In-Context Learning (ICL) ability.
With a few demonstration input-label pairs,
they can predict the label for an unseen input
without additional parameter updates. Despite
the great success in performance, the work-
ing mechanism of ICL still remains an open
problem. In order to better understand how
ICL works, this paper explains language mod-
els as meta-optimizers and understands ICL
as a kind of implicit finetuning. Theoreti-
cally, we figure out that the Transformer at-
tention has a dual form of gradient descent
based optimization. On top of it, we under-
stand ICL as follows: GPT first produces meta-
gradients according to the demonstration ex-
amples, and then these meta-gradients are ap-
plied to the original GPT to build an ICL
model. Experimentally, we comprehensively
compare the behavior of ICL and explicit fine-
tuning based on real tasks to provide empiri-
cal evidence that supports our understanding.
The results prove that ICL behaves similarly
to explicit finetuning at the prediction level,
the representation level, and the attention be-
havior level. Further, inspired by our under-
standing of meta-optimization, we design a
momentum-based attention by analogy with
the momentum-based gradient descent algo-
rithm. Its consistently better performance over
vanilla attention supports our understanding
again from another aspect, and more impor-
tantly, it shows the potential to utilize our un-
derstanding for future model design.
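The claimed duality is easiest to see in the kind of relaxed (unnormalized, linear) attention typically used in such derivations: the demonstration tokens contribute an additive, gradient-like weight update on top of the weights implied by the query's own context. The snippet below is a small numerical check of that identity under the simplifying assumption that keys and values are already projected; it is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_demo, n_ctx = 16, 8, 4

# Value/key projections of demonstration tokens and of the query's own context.
V_demo, K_demo = rng.normal(size=(d, n_demo)), rng.normal(size=(d, n_demo))
V_ctx,  K_ctx  = rng.normal(size=(d, n_ctx)),  rng.normal(size=(d, n_ctx))
q = rng.normal(size=d)                      # attention query for the current token

# Full (unnormalized, linear) attention over [context; demonstrations].
V = np.concatenate([V_ctx, V_demo], axis=1)
K = np.concatenate([K_ctx, K_demo], axis=1)
full = V @ (K.T @ q)

# Dual view: zero-shot weights plus a demonstration-induced "meta-gradient" update.
W_zsl  = V_ctx @ K_ctx.T                    # what the model computes with no demos
dW_icl = V_demo @ K_demo.T                  # low-rank update contributed by the demos
dual   = (W_zsl + dW_icl) @ q

assert np.allclose(full, dual)              # the two computations coincide
```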
1 Introduction
In recent years, large pretrained language models,
especially in Transformer-based architectures (e.g.,
GPT; Brown et al. 2020), have shown strong emer-
gent In-Context Learning (ICL) ability. Different
from finetuning which needs additional parame-
ter updates, ICL just needs several demonstration
∗Contribution during internship at Microsoft Research.
Figure 1: According to the demonstration examples, GPT produces meta-gradients for In-Context Learning (ICL) through forward computation. ICL works by applying these meta-gradients to the model through attention. The meta-optimization process of ICL shares a dual view with finetuning that explicitly updates the model parameters with back-propagated gradients.
examples prepended before the original input, and
then the model can predict the label for even unseen
inputs. On numerous downstream tasks, a large
GPT model can achieve strong performance, even
exceeding some smaller models trained with
supervised finetuning. However, although ICL has
achieved great success in performance, its working
mechanism is still an open problem to be investigated.
In this paper, we explain ICL as a process of
meta-optimization and attempt to build connections
between GPT-based ICL and finetuning. Concen-
trating on the attention modules, we figure out that
the Transformer attention has a dual form of gra-
dient descent based optimization. On top of it, we
arXiv:2212.10559v2 [cs.CL] 21 Dec 2022 |
2306.00297.pdf | Transformers learn to implement preconditioned gradient descent
for in-context learning
Kwangjun Ahn1,3,*, Xiang Cheng1,3,*, Hadi Daneshmand2,3,*, and Suvrit Sra1,3
1Department of Electrical Engineering and Computer Science, MIT
2Foundations of Data Science Institute (FODSI)
3Laboratory for Information and Decision Systems (LIDS), MIT
June 2, 2023
Abstract
Motivated by the striking ability of transformers for in-context learning, several works
demonstrate that transformers can implement algorithms like gradient descent. By a careful
construction of weights, these works show that multiple layers of transformers are expressive
enough to simulate gradient descent iterations. Going beyond the question of expressivity, we ask:
Can transformers learn to implement such algorithms by training over random problem instances?
To our knowledge, we make the first theoretical progress toward this question via analysis of the
loss landscape for linear transformers trained over random instances of linear regression. For
a single attention layer, we prove the global minimum of the training objective implements a
single iteration of preconditioned gradient descent. Notably, the preconditioning matrix not only
adapts to the input distribution but also to the variance induced by data inadequacy. For a
transformer with k attention layers, we prove certain critical points of the training objective
implement k iterations of preconditioned gradient descent. Our results call for future theoretical
studies on learning algorithms by training transformers.
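To make "one attention layer implements one preconditioned gradient step" concrete: starting from w0 = 0 on an in-context least-squares problem, a single step preconditioned by a matrix A predicts x_q·A·(1/n)Σ x_i y_i, which is exactly a linear-attention readout with keys x_i, values y_i, and the preconditioner absorbed into the query-key product. The sketch below checks this correspondence numerically; the particular choice of A and the loss scaling are illustrative assumptions, not the trained weights analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 20
X = rng.normal(size=(n, d))                         # in-context inputs
w_star = rng.normal(size=d)
y = X @ w_star                                      # in-context labels
x_q = rng.normal(size=d)                            # query input
A = np.linalg.inv(X.T @ X / n + 0.1 * np.eye(d))    # some symmetric preconditioner (assumed)

# One preconditioned gradient step from w0 = 0 on L(w) = ||Xw - y||^2 / (2n).
grad0 = -(X.T @ y) / n
w1 = -A @ grad0
pred_gd = x_q @ w1

# The same prediction as linear attention: values y_i, keys x_i, query A x_q,
# averaged over the n in-context examples.
pred_attn = (y * (X @ (A @ x_q))).sum() / n

assert np.isclose(pred_gd, pred_attn)
```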
1 Introduction
In-context learning (ICL) is the striking capability of large language models to adapt to structure in
inputs: given a prompt containing examples and a query, the transformer produces the correct output
based on the context provided by the examples, without needing to update its parameters [Brown
et al., 2020, Lieber et al., 2021, Rae et al., 2021, Black et al., 2022]. This property has become
the focus of a body of recent research that aims to shed light on the underlying mechanism of large
language models [Garg et al., 2022, Akyürek et al., 2022, von Oswald et al., 2022, Li and Malik,
2017, Min et al., 2021, Xie et al., 2021, Elhage et al., 2021, Olsson et al., 2022].
A line of research studies ICL via the expressive power of transformers. Transformer architectures
are powerful Turing machines, capable of implementing various algorithms [Pérez et al., 2021, Wei
et al., 2022]. Given an in-context prompt, Edelman et al. [2022], Olsson et al. [2022] argue that
transformers are able to implement algorithms through the recurrence of multi-head attentions to
extract coarse information from raw input prompts. Akyürek et al. [2022], von Oswald et al. [2022]
assert that transformers can implement gradient descent on linear regression encoded in a given
input prompt.
*Equal contribution, alphabetical order.
arXiv:2306.00297v1 [cs.LG] 1 Jun 2023 |
2105.14368.pdf | Fit without fear: remarkable mathematical
phenomena of deep learning through the prism of
interpolation
Mikhail Belkin
Halicioğlu Data Science Institute,
University of California San Diego
La Jolla, USA
In memory of Partha Niyogi, a thinker, a teacher, and a dear friend.
Abstract
In the past decade the mathematical theory of machine learning has
lagged far behind the triumphs of deep neural networks on practical chal-
lenges. However, the gap between theory and practice is gradually starting
to close. In this paper I will attempt to assemble some pieces of the remark-
able and still incomplete mathematical mosaic emerging from the efforts
to understand the foundations of deep learning. The two key themes will
be interpolation, and its sibling, over-parameterization. Interpolation cor-
responds to fitting data, even noisy data, exactly. Over-parameterization
enables interpolation and provides flexibility to select a right interpolating
model.
As we will see, just as a physical prism separates colors mixed within
a ray of light, the figurative prism of interpolation helps to disentangle
generalization and optimization properties within the complex picture of
modern Machine Learning. This article is written with belief and hope that
clearer understanding of these issues brings us a step closer toward a general
theory of deep learning and machine learning.
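A tiny, generic illustration of the two themes (not an example from the paper): with many more parameters than samples, a linear model can fit even pure-noise labels exactly, and among the infinitely many interpolating solutions one can select, for instance, the minimum-norm one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 200                       # n samples, p >> n features: over-parameterized
X = rng.normal(size=(n, p))
y = rng.normal(size=n)               # pure-noise labels

# Minimum-norm interpolating solution (least-norm least squares via pseudo-inverse).
w = np.linalg.pinv(X) @ y

print(np.max(np.abs(X @ w - y)))     # ~1e-13: even noisy labels are fit exactly
```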
Contents
1 Preface
2 Introduction
arXiv:2105.14368v1 [stat.ML] 29 May 2021 |
2306.09927.pdf | arXiv:2306.09927v1 [stat.ML] 16 Jun 2023
Trained Transformers Learn Linear Models In-Context
Ruiqi Zhang
UC Berkeley
[email protected]
Spencer Frei
UC Berkeley
[email protected]
Peter L. Bartlett
UC Berkeley and Google DeepMind
[email protected]
June 19, 2023
Abstract
Attention-based neural networks such as transformers have demonstrated a remarkable ability to exhibit in-context learning (ICL): Given a short prompt sequence of tokens from an unseen task, they can formulate relevant per-token and next-token predictions without any parameter updates. By embedding a sequence of labeled training data and unlabeled test data as a prompt, this allows for transformers to behave like supervised learning algorithms. Indeed, recent work has shown that when training transformer architectures over random instances of linear regression problems, these models’ predictions mimic those of ordinary least squares.
Towards understanding the mechanisms underlying this phenomenon, we investigate the dynamics of ICL in transformers with a single linear self-attention layer trained by gradient flow on linear regression tasks. We show that despite non-convexity, gradient flow with a suitable random initialization finds a global minimum of the objective function. At this global minimum, when given a test prompt of labeled examples from a new prediction task, the transformer achieves prediction error competitive with the best linear predictor over the test prompt distribution. We additionally characterize the robustness of the trained transformer to a variety of distribution shifts and show that although a number of shifts are tolerated, shifts in the covariate distribution of the prompts are not. Motivated by this, we consider a generalized ICL setting where the covariate distributions can vary across prompts. We show that although gradient flow succeeds at finding a global minimum in this setting, the trained transformer is still brittle under mild covariate shifts.
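A sketch of the setup described above: labeled pairs and an unlabeled query are packed into a prompt matrix, and the reference predictor against which the transformer is compared is ordinary least squares fit on the in-context examples. The embedding layout (features stacked over labels, query label zeroed) follows a common convention in this literature; the exact format used in the paper is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 4, 16
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)
x_q = rng.normal(size=d)

# Prompt embedding: columns are tokens, each column stacks (x_i, y_i);
# the final column holds the query with its label position zeroed out.
Z = np.zeros((d + 1, n + 1))
Z[:d, :n], Z[d, :n] = X.T, y
Z[:d, n] = x_q
print("prompt shape:", Z.shape)

# Reference predictor: ordinary least squares fit on the in-context examples.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS prediction for the query:", x_q @ w_ols)
```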
1 Introduction
Transformer-based neural networks have quickly become the default machine learning model for problems in natural language processing, forming the basis of chatbots like ChatGPT [Ope23], and are increasingly popular in computer vision [Dos+21]. These models can take as input sequences of tokens and return relevant next-token predictions. When trained on sufficiently large and diverse datasets, these models are often able to perform in-context learning (ICL): when given a short sequence of input-output pairs (called a prompt) from a particular task as input, the model can formulate predictions on test examples without having to make any updates to the parameters in the model.
|
2310.15418.pdf | Fractal Landscapes in Policy Optimization
Tao Wang
UC San Diego
[email protected]
Sylvia Herbert
UC San Diego
[email protected]
Sicun Gao
UC San Diego
[email protected]
Abstract
Policy gradient lies at the core of deep reinforcement learning (RL) in continuous
domains. Despite much success, it is often observed in practice that RL training
with policy gradient can fail for many reasons, even on standard control problems
with known solutions. We propose a framework for understanding one inherent
limitation of the policy gradient approach: the optimization landscape in the policy
space can be extremely non-smooth or fractal for certain classes of MDPs, such
that there does not exist a gradient to be estimated in the first place. We draw on
techniques from chaos theory and non-smooth analysis, and analyze the maximal
Lyapunov exponents and Hölder exponents of the policy optimization objectives.
Moreover, we develop a practical method that can estimate the local smoothness
of the objective function from samples to identify when the training process has
encountered fractal landscapes. We show experiments to illustrate how some
failure cases of policy optimization can be explained by such fractal landscapes.
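One simple way to probe this kind of non-smoothness from samples is to fit the scaling of objective differences against perturbation size: the slope of log|J(θ+δu) − J(θ)| versus log δ acts as a local Hölder-type exponent estimate (values well below 1 indicate rough, possibly fractal behavior). The code below is a generic sketch of such a probe, not the authors' estimator; J stands for any scalar objective (e.g. a policy's return) that can be evaluated at perturbed parameters.

```python
import numpy as np

def local_smoothness_exponent(J, theta, deltas=None, n_dirs=8, seed=0):
    """Estimate a local Hoelder-type exponent of J around theta from samples.

    Fits the slope of log|J(theta + delta*u) - J(theta)| vs log delta over random
    unit directions u. Slopes >= 1 are consistent with locally smooth objectives;
    slopes well below 1 indicate Hoelder-type roughness.
    """
    rng = np.random.default_rng(seed)
    deltas = np.logspace(-4, -1, 10) if deltas is None else deltas
    base = J(theta)
    slopes = []
    for _ in range(n_dirs):
        u = rng.normal(size=theta.shape)
        u /= np.linalg.norm(u)
        diffs = np.array([abs(J(theta + d * u) - base) + 1e-30 for d in deltas])
        slopes.append(np.polyfit(np.log(deltas), np.log(diffs), 1)[0])
    return float(np.mean(slopes))

# Toy usage: a smooth objective vs. a rough one, probed around the origin.
smooth = lambda th: float(np.sum(th ** 2))
rough  = lambda th: float(np.sum(np.sqrt(np.abs(th))))   # Hoelder-1/2 near 0
theta0 = np.zeros(3)
# prints ~2.0 (smooth quadratic at its minimum) and ~0.5 (rough landscape)
print(local_smoothness_exponent(smooth, theta0), local_smoothness_exponent(rough, theta0))
```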
1 Introduction
Deep reinforcement learning has achieved much success in various applications [ 23,30,38], but they
also often fail, especially in continuous spaces, on control problems that other methods can readily
solve. The understanding of such failure cases is still limited. For instance, the training process
of reinforcement learning is unstable and the learning curve can fluctuate during training in ways
that are hard to predict. The probability of obtaining satisfactory policies can also be inherently
low in reward-sparse or highly nonlinear control tasks. Existing analysis of the failures focuses
on limitations of the sampling and optimization algorithms, such as function approximation errors
[35,39], difficulty in data collection [ 34], and aggressive updates in the policy space [ 28]. There has
not been much study of potentially deeper causes of failures that may be inherent in the formulation
of policy optimization problems.
Motivated by the common observation that small updates in the policy parameters can significantly
change the performance, we analyze the smoothness of the optimization landscapes in policy opti-
mization. Drawing on chaos theory, we introduce the concept of maximal Lyapunov exponent (MLE)
[17] to the RL setting to measure the exponential rate of trajectory divergence in an MDP. It seems
contradictory that a trajectory in chaotic systems can be both exponentially divergent and uniformly
bounded at the same time, and we will show that these two conflicting facts combine to yield the
fractal structure in the optimization landscape. Intuitively, the objective function is non-differentiable
when the rate of trajectory divergence exceeds the decay rate of discount factor. Furthermore, this
finding indicates that the fluctuations observed in the loss curve are not just due to the numerical or
sampling error but rather reflect the intrinsic properties of the corresponding MDP.
We should emphasize that the fractal landscapes that we will demonstrate are stronger than various
existing results on the non-smoothness [ 2,7]. Most nonsmooth objectives that have been studied still
assume local Lipschitz continuity or piecewise smoothness that implies differentiability almost
everywhere (such as f(x) =|x|). Instead, by showing that the loss landscape of policy optimization
can be fractal, we demonstrate the absence of descent directions, which causes the failure of first-order
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
arXiv:2310.15418v1 [cs.LG] 24 Oct 2023 |
2205.14135.pdf | FlashAttention : Fast and Memory-Efficient Exact Attention
with IO-Awareness
Tri Dao†, Daniel Y. Fu†, Stefano Ermon†, Atri Rudra‡, and Christopher Ré†
†Department of Computer Science, Stanford University
‡Department of Computer Science and Engineering, University at Buffalo, SUNY
{trid,danfu}@cs.stanford.edu, [email protected], [email protected],
[email protected]
June 24, 2022
Abstract
Transformers are slow and memory-hungry on long sequences, since the time and memory complexity
of self-attention are quadratic in sequence length. Approximate attention methods have attempted
to address this problem by trading off model quality to reduce the compute complexity, but often do
not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-
aware—accounting for reads and writes between levels of GPU memory. We propose FlashAttention ,
an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes
between GPU high bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity
ofFlashAttention , showing that it requires fewer HBM accesses than standard attention, and is
optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding
an approximate attention algorithm that is faster than any existing approximate attention method.
FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup
on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3× speedup on
GPT-2 (seq. length 1K), and 2.4× speedup on long-range arena (seq. length 1K-4K). FlashAttention
and block-sparse FlashAttention enable longer context in Transformers, yielding higher quality models
(0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new
capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge
(seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).
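The numerical core of the tiling idea is that softmax can be computed in a streaming fashion: by carrying a running row maximum and normalizer per query, attention over key/value blocks never needs the full n×n score matrix in memory. The reference implementation below illustrates that online-softmax recurrence in plain NumPy; it omits all GPU/SRAM management and is a sketch of the arithmetic, not the CUDA kernel.

```python
import numpy as np

def attention_tiled(Q, K, V, block=64):
    """softmax(Q K^T / sqrt(d)) V, computed by streaming key/value tiles.

    Keeps only a running row maximum `m`, normalizer `l`, and output accumulator `O`.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q, dtype=np.float64)
    l = np.zeros(n)                      # running softmax denominators
    m = np.full(n, -np.inf)              # running row maxima
    for s in range(0, K.shape[0], block):
        Kb, Vb = K[s:s + block], V[s:s + block]
        S = (Q @ Kb.T) * scale           # scores for this tile only
        m_new = np.maximum(m, S.max(axis=1))
        p = np.exp(S - m_new[:, None])
        correction = np.exp(m - m_new)   # rescale previously accumulated stats
        l = l * correction + p.sum(axis=1)
        O = O * correction[:, None] + p @ Vb
        m = m_new
    return O / l[:, None]

# Check against the naive quadratic-memory implementation.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(128, 32)) for _ in range(3))
S = Q @ K.T / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(attention_tiled(Q, K, V), ref)
```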
1 Introduction
Transformer models [ 82] have emerged as the most widely used architecture in applications such as natural
language processing and image classification. Transformers have grown larger [ 5] and deeper [ 83], but
equipping them with longer context remains difficult [ 80], since the self-attention module at their heart
has time and memory complexity quadratic in sequence length. An important question is whether making
attention faster and more memory-efficient can help Transformer models address their runtime and memory
challenges for long sequences.
Many approximate attention methods have aimed to reduce the compute and memory requirements of
attention. These methods range from sparse-approximation [ 51,74] to low-rank approximation [ 12,50,84],
and their combinations [ 3,9,92]. Although these methods reduce the compute requirements to linear or
near-linear in sequence length, many of them do not display wall-clock speedup against standard attention
and have not gained wide adoption. One main reason is that they focus on FLOP reduction (which may not
correlate with wall-clock speed) and tend to ignore overheads from memory access (IO).
In this paper, we argue that a missing principle is making attention algorithms IO-aware [1]—that is,
carefully accounting for reads and writes to different levels of fast and slow memory (e.g., between fast GPU
on-chip SRAM and relatively slow GPU high bandwidth memory, or HBM [ 45], Figure 1 left). On modern
arXiv:2205.14135v2 [cs.LG] 23 Jun 2022 |
bayesian-interactive-optimization.pdf | Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2010)
M. Otaduy and Z. Popovic (Editors)
A Bayesian Interactive Optimization Approach to Procedural
Animation Design
Eric Brochu Tyson Brochu Nando de Freitas
University of British Columbia
Abstract
The computer graphics and animation fields are filled with applications that require the setting of tricky parameters. In many cases, the models are complex and the parameters unintuitive for non-experts. In this paper, we present an optimization method for setting parameters of a procedural fluid animation system by showing the user examples of different parametrized animations and asking for feedback. Our method employs the Bayesian technique of bringing in “prior” belief based on previous runs of the system and/or expert knowledge, to assist users in finding good parameter settings in as few steps as possible. To do this, we introduce novel extensions to Bayesian optimization, which permit effective learning for parameter-based procedural animation applications. We show that even when users are trying to find a variety of different target animations, the system can learn and improve. We demonstrate the effectiveness of our method compared to related active learning methods. We also present a working application for assisting animators in the challenging task of designing curl-based velocity fields, even with minimal domain knowledge other than identifying when a simulation “looks right”.
Categories and Subject Descriptors
(according to ACM CCS): Learning [I.2.6]: Parameter Learning.—User Interfaces [H.5.2]: Interaction Styles.—Three-Dimensional Graphics and Realism [I.3.7]: Animation.—
1 Introduction
Procedural methods for generating animation have long been
used by visual effects and games studios due to their efficiency and artist controllability. However, this control comes with a cost: a set of often unintuitive parameters confronts the user of a procedural animation system. The desired end result is often identifiable by the user, but these parameters must be tuned in a tedious trial-and-error process.
For example, realistic animation of smoke can be achieved
by driving a particle system through a simple combination of vortex rings and curl noise [BHN07]. However, even these two relatively simple procedural methods are influenced by several parameters: the velocity, radius and magnitude of the vortex rings, and the length scale and magnitude of the curl noise. Adding more procedural “flow primitives”, such as uniform and vortical flows, sources and sinks [WH91], turbulent wind [SF93], vortex particles [SRF05], and vortex filaments [AN05] can produce a wider variety of animations, but each of these primitives carries its own set of associated parameters. These parameters can interact in subtle and non-intuitive ways, and small adjustments to certain settings may result in non-uniform changes in the appearance.
Brochu et al. [BGdF07, BdFG07] propose a Bayesian op-
timization technique to assist artists with parameter tuning for bidirectional reflectance distribution functions (BRDFs).
In their iterative scheme, the algorithm selects two sets of parameters and generates example images from them. The user selects the preferred image and the algorithm incorporates this feedback to learn a model of the user’s valuation
function over the domain of parameter values. Given this valuation function, the algorithm is able to select parameters to generate simulations that are likely to be closer to the ones wanted by the artist. The process is repeated until the user is satisfied with the results.
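A heavily simplified sketch of such a loop is given below: a Gaussian-process surrogate models the user's valuation over parameter space and an expected-improvement rule picks the next setting to show. For brevity the user feedback is treated as a numeric rating from a stand-in function, whereas the system described above learns the valuation from pairwise preferences; the kernel, acquisition rule, and candidate sampling are illustrative choices, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

# Interactive loop: here feedback is a numeric rating from a stand-in "user";
# the real system learns a valuation from pairwise "which looks better" choices.
rng = np.random.default_rng(0)
user_rating = lambda p: float(-np.sum((p - 0.7) ** 2))   # hypothetical human
X = rng.uniform(size=(3, 2))                              # initial parameter sets
y = np.array([user_rating(p) for p in X])
for _ in range(10):
    cand = rng.uniform(size=(500, 2))                     # random candidate params
    mu, sd = gp_posterior(X, y, cand)
    nxt = cand[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.vstack([X, nxt]); y = np.append(y, user_rating(nxt))
print("best parameters found:", X[np.argmax(y)])
```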
During the development of a procedural smoke anima-
tion system, we found ourselves with a parameterized system with 12 continuous parameters. Setting these was a challenge for the developers, let alone other users, so we looked to adapt [BdFG07]. In the process, though, we found that the model as presented was unsuitable for our procedural animation. In particular, we identified several limitations:
© The Eurographics Association 2010.
DOI: 10.2312/SCA/SCA10/103-112 |
Introduction to Probabilistic Topic Models.pdf | Introduction to Probabilistic Topic Models
David M. Blei
Princeton University
Abstract
Probabilistic topic models are a suite of algorithms whose aim is to discover the
hidden thematic structure in large archives of documents. In this article, we review the
main ideas of this field, survey the current state-of-the-art, and describe some promising
future directions. We first describe latent Dirichlet allocation (LDA) [ 8], which is the
simplest kind of topic model. We discuss its connections to probabilistic modeling,
and describe two kinds of algorithms for topic discovery. We then survey the growing
body of research that extends and applies topic models in interesting ways. These
extensions have been developed by relaxing some of the statistical assumptions of LDA,
incorporating meta-data into the analysis of the documents, and using similar kinds
of models on a diversity of data types such as social networks, images and genetics.
Finally, we give our thoughts as to some of the important unexplored directions for
topic modeling. These include rigorous methods for checking models built for data
exploration, new approaches to visualizing text and other high dimensional data, and
moving beyond traditional information engineering applications towards using topic
models for more scientific ends.
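As a minimal hands-on companion to the description of LDA, the snippet below fits a two-topic model to a toy corpus with scikit-learn; the corpus, topic count, and default hyperparameters are placeholders, and the article itself is not tied to any particular library.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "genes dna genome sequencing biology",
    "neural network training gradient descent",
    "dna mutation protein expression biology",
    "deep learning network layers optimization",
]
counts = CountVectorizer().fit(docs)
X = counts.transform(docs)                      # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):     # per-topic word weights
    top = topic.argsort()[::-1][:4]
    print(f"topic {k}:", [vocab[i] for i in top])
print(lda.transform(X))                         # per-document topic proportions
```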
1 Introduction
As our collective knowledge continues to be digitized and stored—in the form of news, blogs,
web pages, scientific articles, books, images, sound, video, and social networks—it becomes
more difficult to find and discover what we are looking for. We need new computational tools
to help organize, search and understand these vast amounts of information.
Right now, we work with online information using two main tools—search and links. We
type keywords into a search engine and find a set of documents related to them. We look at
the documents in that set, possibly navigating to other linked documents. This is a powerful
way of interacting with our online archive, but something is missing.
Imagine searching and exploring documents based on the themes that run through them. We
might “zoom in” and “zoom out” to find specific or broader themes; we might look at how
those themes changed through time or how they are connected to each other. Rather than
|
GPT-2.pdf | Language Models are Unsupervised Multitask Learners
Alec Radford*1Jeffrey Wu*1Rewon Child1David Luan1Dario Amodei**1Ilya Sutskever**1
Abstract
Natural language processing tasks, such as ques-
tion answering, machine translation, reading com-
prehension, and summarization, are typically
approached with supervised learning on task-
specific datasets. We demonstrate that language
models begin to learn these tasks without any ex-
plicit supervision when trained on a new dataset
of millions of webpages called WebText. When
conditioned on a document plus questions, the an-
swers generated by the language model reach 55
F1 on the CoQA dataset - matching or exceeding
the performance of 3 out of 4 baseline systems
without using the 127,000+ training examples.
The capacity of the language model is essential
to the success of zero-shot task transfer and in-
creasing it improves performance in a log-linear
fashion across tasks. Our largest model, GPT-2,
is a 1.5B parameter Transformer that achieves
state of the art results on 7 out of 8 tested lan-
guage modeling datasets in a zero-shot setting
but still underfits WebText. Samples from the
model reflect these improvements and contain co-
herent paragraphs of text. These findings suggest
a promising path towards building language pro-
cessing systems which learn to perform tasks from
their naturally occurring demonstrations.
1. Introduction
Machine learning systems now excel (in expectation) at
tasks they are trained for by using a combination of large
datasets, high-capacity models, and supervised learning
(Krizhevsky et al., 2012) (Sutskever et al., 2014) (Amodei
et al., 2016). Yet these systems are brittle and sensitive to
slight changes in the data distribution (Recht et al., 2018)
and task specification (Kirkpatrick et al., 2017). Current sys-
tems are better characterized as narrow experts rather than
*, ** Equal contribution. 1OpenAI, San Francisco, California, United States. Correspondence to: Alec Radford <[email protected]>.
competent generalists. We would like to move towards more
general systems which can perform many tasks – eventually
without the need to manually create and label a training
dataset for each one.
The dominant approach to creating ML systems is to col-
lect a dataset of training examples demonstrating correct
behavior for a desired task, train a system to imitate these
behaviors, and then test its performance on independent
and identically distributed (IID) held-out examples. This
has served well to make progress on narrow experts. But
the often erratic behavior of captioning models (Lake et al.,
2017), reading comprehension systems (Jia & Liang, 2017),
and image classifiers (Alcorn et al., 2018) on the diversity
and variety of possible inputs highlights some of the short-
comings of this approach.
Our suspicion is that the prevalence of single task training
on single domain datasets is a major contributor to the lack
of generalization observed in current systems. Progress
towards robust systems with current architectures is likely
to require training and measuring performance on a wide
range of domains and tasks. Recently, several benchmarks
have been proposed such as GLUE (Wang et al., 2018) and
decaNLP (McCann et al., 2018) to begin studying this.
Multitask learning (Caruana, 1997) is a promising frame-
work for improving general performance. However, mul-
titask training in NLP is still nascent. Recent work re-
ports modest performance improvements (Yogatama et al.,
2019) and the two most ambitious efforts to date have
trained on a total of 10 and 17 (dataset, objective)
pairs respectively (McCann et al., 2018) (Bowman et al.,
2018). From a meta-learning perspective, each (dataset,
objective) pair is a single training example sampled
from the distribution of datasets and objectives. Current
ML systems need hundreds to thousands of examples to
induce functions which generalize well. This suggests that
multitask training may need just as many effective training
pairs to realize its promise with current approaches. It will
be very difficult to continue to scale the creation of datasets
and the design of objectives to the degree that may be re-
quired to brute force our way there with current techniques.
This motivates exploring additional setups for performing
multitask learning.
The current best performing systems on language tasks |
2647-elbo-ing-stein-mixtures.pdf | Under review as a conference paper at ICLR 2023
ELBO-ING STEIN MIXTURES
Anonymous authors
Paper under double-blind review
ABSTRACT
Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a particle-based
technique for Bayesian inference. SVGD has recently gained popularity because it
combines the ability of variational inference to handle tall data with the modeling
power of non-parametric inference. Unfortunately, variance estimation scales
inversely with the dimensionality of the model leading to underestimation, meaning
more particles are required to represent high-dimensional models adequately. Stein
mixtures (Nalisnick & Smyth, 2017) alleviate the exponential growth in particles by
letting each particle parameterize a distribution. However, the inference algorithm
proposed by Nalisnick & Smyth (2017) can be numerically unstable. We show that
their algorithm corresponds to inference with the Rényi α-divergence for α = 0 and
that using other values for α can lead to more stable inference. We empirically
study the performance of Stein mixtures inferred with different α values on various
real-world problems, demonstrating significantly and consistently improved results
when using α = 1, which corresponds to using the evidence lower bound (ELBO).
We call this instance of our algorithm ELBO-within-Stein. An easy-to-use version
of the inference algorithm (for arbitrary α ∈ R) is available in the deep probabilistic
programming language NumPyro (Phan et al., 2019).
1 INTRODUCTION
The ability of Bayesian deep learning to quantify the uncertainty of predictions by deep models is
causing a surge of interest in using these techniques (Izmailov et al., 2021). Bayesian inference aims
to describe i.i.d. data D={xi}n
i=1using a model with latent a variable z. Bayesian inference does
this by computing a posterior distribution p(z|D)over the latent variable given a model describing
the joint distribution p(z,D) =p(D|z)p(z). We obtain the posterior by following Bayes’ theorem,
p(z|D) =Qn
i=1p(xi|z)p(z)/p(D),
where p(D) =R
zQn
i=1p(xi|z)p(z)dzis the normalization constant. For most practical models, the
normalization constant lacks an analytic solution or poses a computability problem, complicating the
Bayesian inference problem.
Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a recent technique for Bayesian
inference that uses a set of particles $Z = \{z_i\}_{i=1}^{N}$ to approximate the posterior $p(z|\mathcal{D})$. The idea
behind SVGD is to iteratively transport $Z$ according to a force field $S_Z$, called the Stein force. The
Stein force is given by
$$S_Z(z_i) = \mathbb{E}_{z_j \sim q_Z}\!\left[ k(z_i, z_j)\, \nabla_{z_j} \log p(z_j|\mathcal{D}) + \nabla_{z_j} k(z_i, z_j) \right], \qquad (1)$$
where $k(\cdot,\cdot)$ is a reproducing kernel (Berlinet & Thomas-Agnan, 2011), $q_Z = N^{-1}\sum_i \delta_{z_i}$ is the
empirical measure on the set of particles $Z$, $\delta_x(y)$ represents the Dirac delta measure, which is equal
to $1$ if $x = y$ and $0$ otherwise, and $\nabla_{z_j} \log p(z_j|\mathcal{D})$ is the gradient of the posterior with respect to
the $j$-th particle. The technique is scalable to tall data (i.e. datasets with many data points) and
offers the flexibility and scope of techniques such as Markov chain Monte Carlo (MCMC). SVGD is
good at capturing multi-modality (Liu & Wang, 2016; Wang & Liu, 2019), and has useful theoretical
interpretations such as a set of particles following a gradient flow (Liu, 2017) or in terms of the
properties of kernels (Liu & Wang, 2018).
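A compact sketch of the update in Eq. (1) with an RBF kernel is given below, applied to a toy 2-D Gaussian target whose score ∇_z log p(z|D) = −z is known in closed form. Step size, bandwidth, particle count, and the fixed-bandwidth kernel are illustrative simplifications (practical implementations often use the median heuristic).

```python
import numpy as np

def svgd_step(Z, score, bandwidth=1.0, eps=0.1):
    """One SVGD update: z_i <- z_i + eps * S_Z(z_i) with an RBF kernel (Eq. 1)."""
    N = Z.shape[0]
    diff = Z[:, None, :] - Z[None, :, :]                   # z_i - z_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2))
    grad_logp = score(Z)                                   # (N, d) scores at particles
    # E_{z_j}[ k(z_i, z_j) grad log p(z_j) + grad_{z_j} k(z_i, z_j) ]
    attract = K @ grad_logp / N
    repulse = (K[:, :, None] * diff).sum(axis=1) / (bandwidth ** 2 * N)
    return Z + eps * (attract + repulse)

# Toy target: standard 2-D Gaussian, score(z) = -z.
rng = np.random.default_rng(0)
Z = rng.normal(loc=5.0, size=(100, 2))                     # badly initialised particles
for _ in range(500):
    Z = svgd_step(Z, score=lambda z: -z)
print(Z.mean(axis=0), Z.std(axis=0))                       # mean near 0, spread near 1
```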
The main problem is that SVGD suffers from the curse of dimensionality: variance estimation scales
inversely with dimensionality (Ba et al., 2021). Nalisnick & Smyth (2017) suggest resolving this by
|
1711.00165.pdf | Published as a conference paper at ICLR 2018
DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES
Jaehoon Lee∗†, Yasaman Bahri∗†, Roman Novak , Samuel S. Schoenholz,
Jeffrey Pennington, Jascha Sohl-Dickstein
Google Brain
{jaehlee, yasamanb, romann, schsam, jpennin, jaschasd }@google.com
ABSTRACT
It has long been known that a single-layer fully-connected neural network with an
i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit
of infinite network width. This correspondence enables exact Bayesian inference
for infinite width neural networks on regression tasks by means of evaluating the
corresponding GP. Recently, kernel functions which mimic multi-layer random
neural networks have been developed, but only outside of a Bayesian framework.
As such, previous work has not identified that these kernels can be used as co-
variance functions for GPs and allow fully Bayesian prediction with a deep neural
network.
In this work, we derive the exact equivalence between infinitely wide deep net-
works and GPs. We further develop a computationally efficient pipeline to com-
pute the covariance function for these GPs. We then use the resulting GPs to per-
form Bayesian inference for wide deep neural networks on MNIST and CIFAR-
10. We observe that trained neural network accuracy approaches that of the corre-
sponding GP with increasing layer width, and that the GP uncertainty is strongly
correlated with trained network prediction error. We further find that test perfor-
mance increases as finite-width trained networks are made wider and more similar
to a GP, and thus that GP predictions typically outperform those of finite-width
networks. Finally we connect the performance of these GPs to the recent theory
of signal propagation in random neural networks.
1 INTRODUCTION
Deep neural networks have emerged in recent years as flexible parametric models which can fit
complex patterns in data. As a contrasting approach, Gaussian processes have long served as a
traditional nonparametric tool for modeling. An equivalence between these two approaches was
derived in Neal (1994a), for the case of one layer networks in the limit of infinite width. Neal
(1994a) further suggested that a similar correspondence might hold for deeper networks.
Consider a deep fully-connected neural network with i.i.d. random parameters. Each scalar output
of the network, an affine transformation of the final hidden layer, will be a sum of i.i.d. terms. As we
will discuss in detail below, in the limit of infinite width the Central Limit Theorem¹ implies that the
function computed by the neural network (NN) is a function drawn from a Gaussian process (GP).
In the case of single hidden-layer networks, the form of the kernel of this GP is well known (Neal
(1994a); Williams (1997)).
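For ReLU networks the layer-to-layer covariance recursion has a closed form (the arc-cosine kernel), which underlies the efficient pipeline mentioned in the abstract. The sketch below computes the depth-L NNGP covariance for a single pair of inputs; the weight and bias variances are illustrative, and the paper's pipeline handles full covariance matrices rather than one pair at a time.

```python
import numpy as np

def nngp_kernel(x1, x2, depth=3, sw2=1.6, sb2=0.1):
    """NNGP covariance K^L(x1, x2) for a depth-L ReLU network (arc-cosine recursion).

    sw2, sb2 are weight/bias variances (illustrative values).
    """
    d = len(x1)
    # Layer-0 (input) covariances.
    k11 = sb2 + sw2 * np.dot(x1, x1) / d
    k22 = sb2 + sw2 * np.dot(x2, x2) / d
    k12 = sb2 + sw2 * np.dot(x1, x2) / d

    def relu_expect(kaa, kbb, kab):
        # E[relu(u) relu(v)] for (u, v) ~ N(0, [[kaa, kab], [kab, kbb]]).
        c = np.clip(kab / np.sqrt(kaa * kbb), -1.0, 1.0)
        theta = np.arccos(c)
        return np.sqrt(kaa * kbb) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)

    for _ in range(depth):
        k12 = sb2 + sw2 * relu_expect(k11, k22, k12)   # uses previous-layer k11, k22
        k11 = sb2 + sw2 * relu_expect(k11, k11, k11)   # = sb2 + sw2 * k11 / 2
        k22 = sb2 + sw2 * relu_expect(k22, k22, k22)
    return k12

x1, x2 = np.random.default_rng(0).normal(size=(2, 10))
print(nngp_kernel(x1, x2))
```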
This correspondence implies that if we choose the hypothesis space to be the class of infinitely
wide neural networks, an i.i.d. prior over weights and biases can be replaced with a corresponding
GP prior over functions. As noted by (Williams, 1997), this substitution enables exact Bayesian
inference for regression using neural networks. The computation requires building the necessary
covariance matrices over the training and test sets and straightforward linear algebra computations.
∗Both authors contributed equally to this work.
†Work done as a member of the Google AI Residency program (g.co/airesidency).
¹Throughout this paper, we assume the conditions on the parameter distributions and nonlinearities are such
that the Central Limit Theorem will hold; for instance, that the weight variance is scaled inversely proportional
to the layer width.
arXiv:1711.00165v3 [stat.ML] 3 Mar 2018 |
2210.03370.pdf | GNM: A General Navigation Model to Drive Any Robot
Dhruv Shah†β, Ajay Sridhar†β, Arjun Bhorkarβ, Noriaki Hiroseβτ, Sergey Levineβ
Fig. 1: A general navigation model to drive any robot. By training on diverse, heterogeneous datasets, a single “omnipolicy” can
control a variety of robots in challenging environments, including new robots, without any robot-specific data collection.
Abstract — Learning provides a powerful tool for vision-based
navigation, but the capabilities of learning-based policies are
constrained by limited training data. If we could combine
data from all available sources, including multiple kinds of
robots, we could train more powerful navigation models. In
this paper, we study how a general goal-conditioned model
for vision-based navigation can be trained on data obtained
from many distinct but structurally similar robots, and enable
broad generalization across environments and embodiments.
We analyze the necessary design decisions for effective data
sharing across robots, including the use of temporal context
and standardized action spaces, and demonstrate that an
omnipolicy trained from heterogeneous datasets outperforms
policies trained on any single dataset. We curate 60 hours
of navigation trajectories from 6 distinct robots, and deploy
the trained GNM on a range of new robots, including an
underactuated quadrotor. We find that training on diverse
data leads to robustness against degradation in sensing and
actuation. Using a pre-trained navigation model with broad
generalization capabilities can bootstrap applications on novel
robots going forward, and we hope that the GNM represents
a step in that direction. For more information on the datasets,
code, and videos, please check out our project page1.
I. I NTRODUCTION
Machine learning methods have enabled broad general-
ization with real-world applicability in natural language pro-
cessing [1], visual perception [2–4], and other domains [5, 6]
by leveraging Internet-scale data. Such generalization typi-
cally requires learning general patterns from diverse datasets,
which are usually collected once and then reused for various
purposes. Such large-scale models also support the ability
to be adapted for new tasks by reusing the representations
learned from broader, larger, and more general datasets, for
example by zero-shot transfer [7–9], or fine-tuning on
target-domain data. Although this paradigm has been very
successful, it is difficult to apply in robotics due to the sheer
diversity of environments and platforms across researchers.
Control policies learned end-to-end usually require sepa-
rate data collection for each robotic platform, leading to
“fragmentation” in progress, where every researcher works
with their own robot-specific dataset and policies, making
it infeasible to accumulate large enough datasets. Can we
1sites.google.com/view/drive-any-robot
†Equal Contribution. βUC Berkeley, τToyota Motor North America.
overcome this challenge by training models on more general
and reusable cross-robot datasets?
We study this question in the context of visual navigation,
where heterogeneity between robots might include different
camera hardware, viewpoints, dynamics, and more broadly,
embodiments, but where the over-arching navigation objec-
tive looks similar irrespective of these differences. A wheeled
robot, quadruped, or a drone all have the same abstract
objectives: to explore the environment, plan a path to the
goal, and avoid collisions. Leveraging this shared abstraction
across robots and training a general navigational omnipolicy
from large-scale data could enable broad generalization to
novel environments, unseen sensor parameters (e.g., camera
intrinsics and extrinsics), and new robot configurations.
In this paper, we propose to take a step towards this
kind of data sharing by training an embodiment-agnostic
general navigation model (GNM) from an aggregated multi-
robot dataset. The primary contribution of our work is a
framework for training a general omnipolicy from multi-
robot datasets, with empirical evidence that such an om-
nipolicy can effectively learn from heterogeneous datasets
and generalize to novel robot platforms. To facilitate this,
we aggregate a large heterogeneous dataset of navigation
trajectories collected across 6 robots, spanning 60 hours of
interactions in challenging indoor and outdoor environments.
We train the GNM on this dataset and deploy it on 4 distinct
robot platforms, including 2 new robots. We show that a
single learned policy can be used across multiple robots
to perform goal-reaching in challenging indoor and outdoor
environments, outperforming policies trained with any single
dataset. We also report robustness to degradation in camera
parameters, tire damage, and other gradual changes that the
robot may experience over its lifetime.
We have publicly released the trained GNM policy, code
used to train and deploy our models on various popular robot
platforms, as well as the dataset used to train these models
at our project page. We hope that this represents a step to-
wards both general-purpose multi-robot datasets and general-
purpose visual navigational models that can be deployed on a
wide range of robots — similar to how practitioners currently
use pre-trained models in vision and language, such models
could constitute pre-trained backbones for visual navigation.
arXiv:2210.03370v2 [cs.RO] 22 May 2023 |
2203.03466.pdf | Tensor Programs V:
Tuning Large Neural Networks via
Zero-Shot Hyperparameter Transfer
Greg Yang∗×, Edward J. Hu∗׆, Igor Babuschkin◦, Szymon Sidor◦, Xiaodong Liu×,
David Farhi◦, Nick Ryder◦, Jakub Pachocki◦, Weizhu Chen×, Jianfeng Gao×
×Microsoft Corporation ◦OpenAI
Abstract
Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively
so for neural networks (NNs) with billions of parameters. We show that, in the
recently discovered Maximal Update Parametrization (µP), many optimal HPs
remain stable even as model size changes. This leads to a new HP tuning paradigm
we call µTransfer: parametrize the target model in µP, tune the HP indirectly on a
smaller model, and zero-shot transfer them to the full-sized model, i.e., without
directly tuning the latter at all. We verify µTransfer on Transformer and ResNet.
For example, 1) by transferring pretraining HPs from a model of 13M parameters,
we outperform published numbers of BERT-large (350M parameters), with a total
tuning cost equivalent to pretraining BERT-large once; 2) by transferring from
40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with
tuning cost only 7% of total pretraining cost. A PyTorch implementation of our
technique can be found at github.com/microsoft/mup and is installable via
pip install mup.
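The practical recipe is: write the model in µP (the paper's pip-installable mup package provides helpers for this), sweep hyperparameters on a narrow proxy model, and reuse the optimum on the wide target without retuning. The sketch below only illustrates that workflow; train_loss is a fake placeholder standing in for short training runs of a µP-parametrized Transformer, so the numbers it produces mean nothing.

```python
import numpy as np

def tuned_lr(width, lrs, train_loss):
    """Sweep learning rates at a given width and return the argmin of the loss."""
    losses = [train_loss(width=width, lr=lr) for lr in lrs]
    return lrs[int(np.argmin(losses))]

# muTransfer workflow (schematic): tune on a small proxy width, reuse at full width.
lrs = [2.0 ** -k for k in range(20, 8, -1)]

def train_loss(width, lr):
    # Placeholder: in practice this would train a muP-parametrized Transformer of
    # d_model=width for a short budget and return its training loss.
    return abs(np.log2(lr) + 12) + 0.01 * np.log2(width)   # fake loss surface

small_lr = tuned_lr(width=128, lrs=lrs, train_loss=train_loss)
# Under muP the optimum is (approximately) width-stable, so transfer it zero-shot:
big_lr = small_lr
print(f"tuned at width 128: lr={small_lr}; reused at width 8192: lr={big_lr}")
```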
1 Introduction
Figure 1: Training loss against learning rate on Transformers of varying dmodel trained with Adam. [Left panel, “Standard Practice”: the optimum shifts across widths 128–8192; right panel, “Our Work”: the optimum is stable.] Conventionally and in contrast with our technique, different widths do not share the same optimal hyperparameter; wider networks do not always perform better than narrower ones; in fact they underperform the same-width networks in our technique even after tuning learning rate (see dashed line). See Sections 3 and 4 for experimental setup.
Hyperparameter (HP) tuning is critical to deep
learning. Poorly chosen HPs result in subpar
performance and training instability. Many pub-
lished baselines are hard to compare to one
another due to varying degrees of HP tuning.
These issues are exacerbated when training ex-
tremely large deep learning models, since state-
of-the-art networks with billions of parameters
become prohibitively expensive to tune.
Recently, [ 57] showed that different neural net-
work parametrizations induce different infinite-
width limits and proposed the Maximal Update
Parametrization (abbreviated µP) (summarized
in Table 3) that enables “maximal” feature learn-
ing in the limit. Intuitively, it ensures that each
layer is updated on the same order during train-
ing regardless of width.² In contrast, while the
standard parametrization (SP) ensures activations are of unit order at initialization, it actually causes
them to blow up in wide models during training [ 57] essentially due to an imbalance of per-layer
†Work done partly during Microsoft AI Residency Program.
∗Equal contribution. Order is random. Correspondence to {gregyang, edwardhu}@microsoft.com
²i.e., the updates’ effect on activations becomes roughly independent of width in the large width limit.
arXiv:2203.03466v2 [cs.LG] 28 Mar 2022 |
1705.01509.pdf | Neural Models for Information Retrieval
Bhaskar Mitra
Microsoft, UCL∗
Cambridge, UK
[email protected]
Nick Craswell
Microsoft
Bellevue, USA
[email protected]
Abstract
Neural ranking models for information retrieval (IR) use shallow or deep neural
networks to rank search results in response to a query. Traditional learning to
rank models employ machine learning techniques over hand-crafted IR features.
By contrast, neural models learn representations of language from raw text that
can bridge the gap between query and document vocabulary. Unlike classical IR
models, these new machine learning based approaches are data-hungry, requiring
large scale training data before they can be deployed. This tutorial introduces basic
concepts and intuitions behind neural IR models, and places them in the context of
traditional retrieval models. We begin by introducing fundamental concepts of IR
and different neural and non-neural approaches to learning vector representations
of text. We then review shallow neural IR methods that employ pre-trained neural
term embeddings without learning the IR task end-to-end. We introduce deep
neural networks next, discussing popular deep architectures. Finally, we review the
current DNN models for information retrieval. We conclude with a discussion on
potential future directions for neural IR.
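As a concrete instance of the "pre-trained term embeddings without end-to-end training" family of methods discussed in the tutorial, a common shallow baseline scores documents by cosine similarity between averaged query and document word vectors. The sketch below assumes an embedding lookup table is already available (here a random stand-in rather than real word2vec/GloVe vectors) and is an illustration, not a method evaluated in the tutorial.

```python
import numpy as np

def avg_embed(text, emb, dim=50):
    """Mean of the word vectors found in `emb` (zeros if nothing matches)."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank(query, docs, emb):
    q = avg_embed(query, emb)
    def cos(a, b):
        n = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / n) if n > 0 else 0.0
    return sorted(((cos(q, avg_embed(d, emb)), d) for d in docs), reverse=True)

# `emb` would normally hold pre-trained vectors; here it is a tiny random stand-in
# just to make the example runnable.
rng = np.random.default_rng(0)
vocab = "cheap flights london hotels weather forecast".split()
emb = {w: rng.normal(size=50) for w in vocab}
docs = ["cheap hotels in london", "london weather forecast", "flights to london"]
print(rank("cheap flights london", docs, emb))
```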
1 Introduction
Since the turn of the decade, there have been dramatic improvements in performance in computer
vision, speech recognition, and machine translation tasks, witnessed in research and in real-world
applications [ 112]. These breakthroughs were largely fuelled by recent advances in neural network
models, usually with multiple hidden layers, known as deep architectures [ 8,49,81,103,112].
Exciting novel applications, such as conversational agents [ 185,203], have also emerged, as well
as game-playing agents with human-level performance [ 147,180]. Work has now begun in the
information retrieval (IR) community to apply these neural methods, leading to the possibility of
advancing the state of the art or even achieving breakthrough performance as in these other fields.
Retrieval of information can take many forms. Users can express their information need in the form of
a text query—by typing on a keyboard, by selecting a query suggestion, or by voice recognition—or
the query can be in the form of an image, or in some cases the need can even be implicit. Retrieval can
involve ranking existing pieces of content, such as documents or short-text answers, or composing
new responses incorporating retrieved information. Both the information need and the retrieved
results may use the same modality (e.g., retrieving text documents in response to keyword queries),
or different ones (e.g., image search using text queries). Retrieval systems may consider user history,
physical location, temporal changes in information, or other context when ranking results. They may
also help users formulate their intent (e.g., via query auto-completion or query suggestion) and/or
extract succinct summaries of results for easier inspection.
Neural IR refers to the application of shallow or deep neural networks to these retrieval tasks. This
tutorial serves as an introduction to neural methods for ranking documents in response to a query, an
∗The author is a part-time PhD student at University College London.
DRAFT. Copyright is held by the author(s). May 2017.
arXiv:1705.01509v1 [cs.IR] 3 May 2017 |
2312.12456.pdf | arXiv:2312.12456v1 [cs.LG] 16 Dec 2023
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
Yixin Song, Zeyu Mi∗, Haotong Xie and Haibo Chen
Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University
Abstract
This paper introduces PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation. This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity. Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69× while retaining model accuracy.
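The core placement idea can be sketched in a few lines: profile how often each FFN neuron activates, keep the frequently activated ("hot") rows of the weight matrices where the fast memory lives, and compute the remaining ("cold") rows separately, then sum the two partial outputs, which is exact because the hidden dimension is simply partitioned. The threshold, the profiling statistics, and the absence of any real device placement below are illustrative simplifications; the actual system adds activation predictors and neuron-aware sparse operators on top.

```python
import torch

def split_ffn_by_heat(W1, W2, activation_freq, hot_fraction=0.2):
    """Partition an FFN's hidden neurons into hot and cold subsets.

    W1: (hidden, d_model) up-projection, W2: (d_model, hidden) down-projection.
    activation_freq: (hidden,) empirical activation frequency per neuron
    (assumed gathered offline on profiling data).
    """
    n_hot = int(hot_fraction * W1.shape[0])
    hot = torch.topk(activation_freq, n_hot).indices
    mask = torch.ones(W1.shape[0], dtype=torch.bool)
    mask[hot] = False
    cold = mask.nonzero(as_tuple=True)[0]
    return (W1[hot], W2[:, hot]), (W1[cold], W2[:, cold])

def ffn_forward(x, hot_part, cold_part):
    # Hot neurons would sit on the GPU, cold neurons on the CPU; the partial
    # outputs simply add up because the hidden dimension is partitioned.
    (W1h, W2h), (W1c, W2c) = hot_part, cold_part
    return torch.relu(x @ W1h.T) @ W2h.T + torch.relu(x @ W1c.T) @ W2c.T

d_model, hidden = 16, 64
W1, W2 = torch.randn(hidden, d_model), torch.randn(d_model, hidden)
freq = torch.rand(hidden)                      # stand-in for profiled statistics
x = torch.randn(1, d_model)
hot_part, cold_part = split_ffn_by_heat(W1, W2, freq)
full = torch.relu(x @ W1.T) @ W2.T
assert torch.allclose(ffn_forward(x, hot_part, cold_part), full, atol=1e-5)
```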
1 Introduction
Generative large language models (LLMs) have garnered attention for their remarkable capabilities in creative writing, advanced code generation, and sophisticated natural language processing tasks [5,42,49]. These models, widely deployed in data centers equipped with high-end and expensive server-grade GPUs, have significantly influenced our daily lives and work practices. Meanwhile, there is an emerging trend of running LLMs on more accessible local platforms, particularly personal computers (PCs) with consumer-grade GPUs. This evolution is driven by the need for enhanced data privacy [25], model customization [22], and reduced inference costs [42]. In contrast to data-center deployments, which prioritize high throughput [18,37,47], local deployments focus on low latency in processing small batches.
Nonetheless, deploying LLMs on consumer-grade GPUs presents significant challenges due to their substantial memory requirements. LLMs, typically functioning as autoregressive Transformers, sequentially generate text token-by-token, each needing to access the entire model consisting of hundreds of billions of parameters. Therefore, the inference process is fundamentally constrained by the GPU’s memory capacity. This limitation is particularly acute in local deployments where the processing of individual requests (often just one at a time) [6] leaves minimal opportunity for parallel processing.
∗Zeyu Mi ([email protected]) is the corresponding author.
Existing approaches to such memory issues include model compression and offloading. Compression techniques like quantization [12,46], distillation [48], and pruning [23] reduce the model size. However, even deeply compressed models remain too large for consumer-grade GPUs. For instance, an OPT-66B model with 4-bit precision demands approximately 40GB of memory just to load its parameters [20], exceeding the capacity of even high-end GPUs like the NVIDIA RTX 4090. Another approach is model offloading, which partitions the model between GPU and CPU at the Transformer layer level [3,14,37]. State-of-the-art systems like llama.cpp [14] distribute layers between CPU and GPU memories, leveraging both for inference, thus reducing the GPU resources required. However, this method is hindered by the slow PCIe interconnect and the CPUs’ limited computational capabilities, resulting in high inference latency.
In this paper, we argue that the key reason for memory issues in LLM inference is the locality mismatch between hardware architecture and the characteristics of LLM inference. Current hardware architectures are designed with a memory hierarchy optimized for data locality. Ideally, a small, frequently accessed working set should be stored in the GPU, which offers higher memory bandwidth but limited capacity. In contrast, larger, less frequently accessed data are better suited for CPUs, which provide more extensive memory capacity but lower bandwidth. Nevertheless, the vast volume of parameters required for each LLM inference iteration leads to a working set that is too large for a single GPU, thus impeding efficient locality exploitation.
We have observed that LLM inference inherently exhibits high locality. Specifically, during each inference iteration, a limited number of neurons1 are activated, significantly influ-
1This paper defines a neuron as a specific row/column in a weight matrix.
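This activation sparsity is what makes neuron-granularity computation attractive. As a rough illustration (not PowerInfer's actual GPU/CPU operators), the sketch below evaluates a ReLU feed-forward projection only for the rows that a hypothetical predictor marks as active; the predictor, shapes, and hot-neuron ratio are placeholders.

```python
import numpy as np

def sparse_ffn_matvec(x, W, active_rows):
    # Each "neuron" is one row of W: multiply only the rows predicted to fire,
    # leaving the remaining outputs as (implicit) zeros.
    y = np.zeros(W.shape[0])
    y[active_rows] = np.maximum(W[active_rows] @ x, 0.0)   # ReLU activation
    return y

rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 1024))
x = rng.normal(size=1024)
hot = rng.choice(4096, size=205, replace=False)   # stand-in for an activation predictor
y = sparse_ffn_matvec(x, W, hot)
```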
1 |
1802.09568.pdf | arXiv:1802.09568v2 [cs.LG] 2 Mar 2018
Shampoo: Preconditioned Stochastic Tensor Optimization
Vineet Gupta† Tomer Koren† Yoram Singer‡
March 5, 2018
Abstract
Preconditioned gradient methods are among the most general and powerful tools in optimization. However, preconditioning requires storing and manipulating prohibitively large matrices. We describe and analyze a new structure-aware preconditioning algorithm, called Shampoo, for stochastic optimization over tensor spaces. Shampoo maintains a set of preconditioning matrices, each of which operates on a single dimension, contracting over the remaining dimensions. We establish convergence guarantees in the stochastic convex setting, the proof of which builds upon matrix trace inequalities. Our experiments with state-of-the-art deep learning models show that Shampoo is capable of converging considerably faster than commonly used optimizers. Although it involves a more complex update rule, Shampoo's runtime per step is comparable to that of simple gradient methods such as SGD, AdaGrad, and Adam.
1 Introduction
Over the last decade, stochastic first-order optimization methods have emerged as the canonical tools for training large-scale machine learning models. These methods are particularly appealing due to their wide applicability and their low runtime and memory costs.
A potentially more powerful family of algorithms consists of preconditioned gradient methods. Preconditioning methods maintain a matrix, termed a preconditioner, which is used to transform (i.e., premultiply) the gradient vector before it is used to take a step. Classic algorithms in this family include Newton's method, which employs the local Hessian as a preconditioner, as well as a plethora of quasi-Newton methods (e.g., [8, 15, 19]) that can be used whenever second-order information is unavailable or too expensive to compute. Newer additions to this family are preconditioned online algorithms, most notably AdaGrad [6], that use the covariance matrix of the accumulated gradients to form a preconditioner.
While preconditioned methods often lead to improved convergence properties, the dimensionality of typical problems in machine learning prohibits out-of-the-box use of full-matrix preconditioning. To mitigate this issue, specialized variants have been devised in which the full preconditioner is replaced with a diagonal approximation [6, 14], a sketched version [9, 20], or various estimations thereof [7, 2, 23]. While the diagonal methods are heavily used in practice thanks to their favorable scaling with the dimension, the other approaches are seldom practical at large scale as one typically requires a fine approximation (or estimate) of the preconditioner that often demands super-linear memory and computation.
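To make the structure-aware alternative concrete, the following minimal sketch illustrates the matrix (order-2) case of the idea described in the abstract: one preconditioner is accumulated per dimension of the parameter, and their inverse fourth roots are applied on either side of the gradient. The step size, initialization, and toy gradients are placeholders, not a faithful reproduction of the full algorithm or its analysis.

```python
import numpy as np

def inv_fourth_root(M, eps=1e-4):
    # M^{-1/4} for a symmetric PSD matrix, via eigendecomposition.
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, eps, None) ** -0.25) @ vecs.T

def shampoo_step(W, G, L, R, lr=0.1):
    # One preconditioner per dimension of the matrix parameter:
    # L contracts over columns of the gradient, R over rows.
    L += G @ G.T
    R += G.T @ G
    W -= lr * inv_fourth_root(L) @ G @ inv_fourth_root(R)
    return W, L, R

# toy usage: a 4x3 matrix parameter with random stand-in gradients
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
L, R = np.eye(4), np.eye(3)      # epsilon * I initialization (placeholder)
for _ in range(10):
    G = rng.normal(size=W.shape)
    W, L, R = shampoo_step(W, G, L, R)
```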
In this paper, we take an alternative approach to preconditioning and describe an efficient and practical apparatus that exploits the structure of the parameter space. Our approach is motivated by the observation that in numerous machine learning applications, the parameter space entertains a more complex structure than a monolithic vector in Euclidean space. In
†Google Brain. Email: {vineet,tkoren}@google.com
‡Princeton University and Google Brain. Email: [email protected]
1 |
Variational Inference.pdf | Variational Inference
David M. Blei
1 Set up
•As usual, we will assume that x = x_{1:n} are observations and z = z_{1:m} are hidden variables. We assume additional parameters α that are fixed.
•Note we are general—the hidden variables might include the “parameters,” e.g., in a traditional inference setting. (In that case, α are the hyperparameters.)
•We are interested in the posterior distribution,
p(z \mid x, \alpha) = \frac{p(z, x \mid \alpha)}{\int_z p(z, x \mid \alpha)}. \qquad (1)
•As we saw earlier, the posterior links the data and a model. It is used in all downstream
analyses, such as for the predictive distribution.
•(Note: The problem of computing the posterior is an instance of a more general problem
that variational inference solves.)
2 Motivation
•We can’t compute the posterior for many interesting models.
•Consider the Bayesian mixture of Gaussians,
1. Draw µ_k ∼ N(0, τ²) for k = 1 . . . K.
2. For i = 1 . . . n:
(a) Draw z_i ∼ Mult(π);
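A minimal simulation of this generative process is sketched below; the Gaussian emission x_i ∼ N(µ_{z_i}, σ²) and the uniform mixing weights π are assumptions made only for illustration.

```python
import numpy as np

def sample_bayesian_gmm(n=200, K=3, tau=5.0, sigma=1.0, pi=None, seed=0):
    # 1. mu_k ~ N(0, tau^2);  2(a). z_i ~ Mult(pi);  assumed emission x_i ~ N(mu_{z_i}, sigma^2)
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K) if pi is None else pi
    mu = rng.normal(0.0, tau, size=K)
    z = rng.choice(K, size=n, p=pi)
    x = rng.normal(mu[z], sigma)
    return x, z, mu

x, z, mu = sample_bayesian_gmm()
```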
1 |
2208.02813.pdf | Towards Understanding Mixture of Experts in Deep
Learning
Zixiang Chen∗ and Yihe Deng† and Yue Wu‡ and Quanquan Gu§ and Yuanzhi Li¶
Abstract
The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by a router, has
achieved great success in deep learning. However, the understanding of such architecture remains
elusive. In this paper, we formally study how the MoE layer improves the performance of neural
network learning and why the mixture model will not collapse into a single model. Our empirical
results suggest that the cluster structure of the underlying problem and the non-linearity of the
expert are pivotal to the success of MoE. To further understand this, we consider a challenging
classification problem with intrinsic cluster structures, which is hard to learn using a single
expert. Yet with the MoE layer, by choosing the experts as two-layer nonlinear convolutional
neural networks (CNNs), we show that the problem can be learned successfully. Furthermore,
our theory shows that the router can learn the cluster-center features, which helps divide the
input complex problem into simpler linear classification sub-problems that individual experts
can conquer. To our knowledge, this is the first result towards formally understanding the
mechanism of the MoE layer for deep learning.
1 Introduction
The Mixture-of-Expert (MoE) structure (Jacobs et al., 1991; Jordan and Jacobs, 1994) is a classic
design that substantially scales up the model capacity and only introduces small computation
overhead. In recent years, the MoE layer (Eigen et al., 2013; Shazeer et al., 2017), which is an
extension of the MoE model to deep neural networks, has achieved remarkable success in deep
learning. Generally speaking, an MoE layer contains many experts that share the same network
architecture and are trained by the same algorithm, with a gating (or routing) function that routes
individual inputs to a few experts among all the candidates. Through the sparse gating function,
the router in the MoE layer can route each input to the top-K (K ≥ 2) best experts (Shazeer et al., 2017), or the single (K = 1) best expert (Fedus et al., 2021). This routing scheme only costs the computation of K experts for a new input, which enjoys fast inference time.
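A generic sketch of such a sparsely-gated layer (not the specific two-layer CNN experts analyzed in this paper): the router scores every expert, keeps the top-K, and combines only those experts' outputs with renormalized gate weights. Dimensions and expert definitions are placeholders.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def moe_layer(x, gate_W, experts, k=2):
    # Sparse top-k routing: only k experts are evaluated per input.
    scores = gate_W @ x                      # one routing score per expert
    top = np.argsort(scores)[-k:]            # indices of the k best experts
    gates = softmax(scores[top])             # renormalize gates over the selected experts
    return sum(g * experts[i](x) for g, i in zip(gates, top))

# toy usage: 4 experts, each a small two-layer ReLU network with random weights
rng = np.random.default_rng(0)
d = 8
experts = []
for _ in range(4):
    W1, W2 = rng.normal(size=(16, d)), rng.normal(size=(d, 16))
    experts.append(lambda x, W1=W1, W2=W2: W2 @ np.maximum(W1 @ x, 0.0))
gate_W = rng.normal(size=(4, d))
y = moe_layer(rng.normal(size=d), gate_W, experts, k=2)
```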
∗Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
[email protected]
†Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
[email protected]
‡Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
[email protected]
§Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail:
[email protected]
¶Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA; email:
[email protected]
1arXiv:2208.02813v1 [cs.LG] 4 Aug 2022 |
wenzel20a.pdf | How Good is the Bayes Posterior in Deep Neural Networks Really?
Florian Wenzel*1 Kevin Roth*+2 Bastiaan S. Veeling*+3,1 Jakub Świątkowski4+ Linh Tran5+
Stephan Mandt6+ Jasper Snoek1 Tim Salimans1 Rodolphe Jenatton1 Sebastian Nowozin7+
Abstract
During the past five years the Bayesian deep learn-
ing community has developed increasingly accu-
rate and efficient approximate inference proce-
dures that allow for Bayesian inference in deep
neural networks. However, despite this algo-
rithmic progress and the promise of improved
uncertainty quantification and sample efficiency
there are—as of early 2020—no publicized de-
ployments of Bayesian neural networks in indus-
trial practice. In this work we cast doubt on
the current understanding of Bayes posteriors in
popular deep neural networks: we demonstrate
through careful MCMC sampling that the pos-
terior predictive induced by the Bayes posterior
yields systematically worse predictions compared
to simpler methods including point estimates ob-
tained from SGD. Furthermore, we demonstrate
that predictive performance is improved signifi-
cantly through the use of a “cold posterior” that
overcounts evidence. Such cold posteriors sharply
deviate from the Bayesian paradigm but are com-
monly used as heuristic in Bayesian deep learn-
ing papers. We put forward several hypotheses
that could explain cold posteriors and evaluate
the hypotheses through experiments. Our work
questions the goal of accurate posterior approx-
imations in Bayesian deep learning: If the true
Bayes posterior is poor, what is the use of more
accurate approximations? Instead, we argue that
it is timely to focus on understanding the origin
of the improved performance of cold posteriors.
CODE: https://github.com/google-research/google-research/tree/master/cold_posterior_bnn
*Equal contribution+Work done while at Google1Google
Research2ETH Zurich3University of Amsterdam4University
of Warsaw5Imperial College London6University of California,
Irvine7Microsoft Research. Correspondence to: Florian Wenzel
<fl[email protected]>.
Proceedings of the 37thInternational Conference on Machine
Learning , Online, PMLR 119, 2020. Copyright 2020 by the au-
thor(s).
[Plot: test accuracy vs. temperature T for SG-MCMC against an SGD baseline.]
Figure 1. The “cold posterior” effect: for a ResNet-20 on CIFAR-10 we can improve the generalization performance significantly by cooling the posterior with a temperature T ≪ 1, deviating from the Bayes posterior p(θ|D) ∝ exp(−U(θ)/T) at T = 1.
1. Introduction
In supervised deep learning we use a training dataset D = {(x_i, y_i)}_{i=1,...,n} and a probabilistic model p(y|x, θ) to minimize the regularized cross-entropy objective,
L(\theta) := -\frac{1}{n} \sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) + \Omega(\theta), \qquad (1)
where Ω(θ) is a regularizer over model parameters. We
approximately optimize (1) using variants of stochastic gra-
dient descent (SGD), (Sutskever et al., 2013). Beside being
efficient, the SGD minibatch noise also has generalization
benefits (Masters & Luschi, 2018; Mandt et al., 2017).
1.1. Bayesian Deep Learning
In Bayesian deep learning we do not optimize for a single likely model but instead want to discover all likely models. To this end we approximate the posterior distribution over model parameters, p(θ|D) ∝ exp(−U(θ)/T), where U(θ) is the posterior energy function,
U(\theta) := -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) - \log p(\theta), \qquad (2)
and T is a temperature. Here p(θ) is a proper prior density function, for example a Gaussian density. If we scale U(θ) by 1/n and set Ω(θ) = −(1/n) log p(θ) we recover L(θ) in (1). Therefore exp(−U(θ)) simply gives high probability to models which have low loss L(θ). Given p(θ|D) we predict on a new instance x by averaging over all likely models,
p(y \mid x, D) = \int p(y \mid x, \theta)\, p(\theta \mid D)\, d\theta. \qquad (3) |
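To see how the temperature enters in practice, the following sketch runs a tempered SGLD-style sampler on a toy quadratic energy: the injected noise scales with T, so cooling to T ≪ 1 concentrates the samples. The step size, energy, and iteration counts are illustrative only, not the paper's SG-MCMC setup.

```python
import numpy as np

def sgld_step(theta, grad_U, step, T, rng):
    # Langevin update targeting p(theta) ∝ exp(-U(theta)/T): gradient drift on U
    # plus Gaussian noise whose variance is proportional to the temperature T.
    noise = rng.normal(size=theta.shape) * np.sqrt(step * T)
    return theta - 0.5 * step * grad_U(theta) + noise

grad_U = lambda th: th            # toy energy U(theta) = 0.5 * ||theta||^2
rng = np.random.default_rng(0)
for T in (1.0, 1e-2):             # Bayes posterior (T = 1) vs. a "cold" posterior
    theta, samples = np.ones(3), []
    for _ in range(5000):
        theta = sgld_step(theta, grad_U, step=1e-2, T=T, rng=rng)
        samples.append(theta.copy())
    print(T, np.var(np.array(samples)[1000:]))   # sample variance shrinks roughly with T
```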
2301.13856.pdf | Simplex Random Features
Isaac Reid1 Krzysztof Choromanski*2,3 Valerii Likhosherstov1 Adrian Weller*1,4
Abstract
We present Simplex Random Features (SimRFs),
a new random feature (RF) mechanism for unbi-
ased approximation of the softmax and Gaussian
kernels by geometrical correlation of random pro-
jection vectors. We prove that SimRFs provide
the smallest possible mean square error (MSE)
on unbiased estimates of these kernels among
the class of weight-independent geometrically-
coupled positive random feature (PRF) mech-
anisms, substantially outperforming the previ-
ously most accurate Orthogonal Random Fea-
tures (ORFs, Yu et al., 2016) at no observable
extra cost. We present a more computationally
expensive SimRFs+ variant, which we prove is
asymptotically optimal in the broader family of
weight-dependent geometrical coupling schemes
(which permit correlations between random vec-
tor directions and norms). In extensive empirical
studies, we show consistent gains provided by
SimRFs in settings including pointwise kernel
estimation, nonparametric classification and scal-
able Transformers (Choromanski et al., 2020).1
1. Introduction
Embedding methods, which project feature vectors into a
new space, are ubiquitous in machine learning. The canoni-
cal example is the Johnson-Lindenstrauss Transform (JLT)
(Johnson, 1984; Dasgupta et al., 2010; Kane & Nelson,
2014; Kar & Karnick, 2012), where a collection of high-
dimensional points is embedded in a much lower dimen-
sional space whilst (approximately) preserving their metric
relationships, e.g. distances and dot-products. Another ap-
plication is found in kernel approximation (Liu et al., 2022;
Yang et al., 2014; Pennington et al., 2015; Li et al., 2010),
*Equal senior co-leads. 1University of Cambridge, 2Google, 3Columbia University, 4Alan Turing Institute. Correspondence to: Isaac Reid <[email protected]>, Krzysztof Choromanski <[email protected]>.
Proceedings of the 40thInternational Conference on Machine
Learning , Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright
2023 by the author(s).
1Code is available at https://github.com/isaac-reid/
simplex random features.
where the nonlinear similarity measure (kernel) in the original space is translated to a linear kernel in the latent space. For example, a kernel K(·,·) : R^d × R^d → R can be approximated using so-called random features (RFs): randomised nonlinear transformations ϕ(·) : R^d → R^{d′} constructed such that
K(x, y) = \mathbb{E}[\hat{K}(x, y)], \quad \text{where } \hat{K}(x, y) \overset{\mathrm{def}}{=} \phi(x)^\top \phi(y). \qquad (1)
Provided K is stationary, meaning K(x, y) = K(x − y), we can use Bochner's theorem to write
K(x - y) = \int_{\mathbb{R}^d} p(w)\, e^{i w^\top (x - y)}\, d^d w, \qquad (2)
where p(w) is the Fourier transform of K. If K is positive semidefinite, p(w) is non-negative so we can treat it as a probability density. This invites Monte Carlo (MC) sampling, yielding Random Fourier Features (RFFs) of the following form, where vectors w_i are sampled from p(w), m is their number and ⊙ denotes concatenation (Rahimi & Recht, 2007; 2008):
\phi_{\mathrm{RFF}}(z) \overset{\mathrm{def}}{=} \sqrt{\tfrac{1}{m}} \left( \odot_{i=1}^{m} \left[ \sin(w_i^\top z), \cos(w_i^\top z) \right] \right)^\top. \qquad (3)
Furthermore, if K is a Gaussian kernel, defined by
K_{\mathrm{gauss}}(x, y) \overset{\mathrm{def}}{=} \exp\left( -\tfrac{\|x - y\|_2^2}{2} \right), \qquad (4)
random vectors w_i are sampled from the multivariate Gaussian distribution N(0, I_d).
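The following minimal sketch instantiates equations (3)–(4) with toy dimensions, drawing rows w_i ∼ N(0, I_d) and checking that ϕ_RFF(x)⊤ϕ_RFF(y) is close to the exact Gaussian kernel; the inputs are scaled down only so the kernel value is not vanishingly small.

```python
import numpy as np

def phi_rff(z, W):
    # Eq. (3): concatenate sin/cos projections and scale by sqrt(1/m).
    m = W.shape[0]
    proj = W @ z
    return np.sqrt(1.0 / m) * np.concatenate([np.sin(proj), np.cos(proj)])

rng = np.random.default_rng(0)
d, m = 16, 4096
x, y = rng.normal(size=d) / 4, rng.normal(size=d) / 4
W = rng.normal(size=(m, d))                          # rows w_i ~ N(0, I_d)

exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)      # Eq. (4)
approx = phi_rff(x, W) @ phi_rff(y, W)               # unbiased MC estimate
print(exact, approx)
```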
Another kernel, of key interest in Transformer architectures (Vaswani et al., 2017; Choromanski et al., 2020), is the so-called softmax kernel:
K_{\mathrm{smax}}(x, y) \overset{\mathrm{def}}{=} \exp(x^\top y). \qquad (5)
Since K_{\mathrm{gauss}}(x, y) = K_{\mathrm{smax}}(x, y) \exp(-\tfrac{\|x\|_2^2}{2} - \tfrac{\|y\|_2^2}{2}), RF
mechanisms for the Gaussian kernel can be readily con-
verted into the corresponding mechanism for softmax and
vice versa (Likhosherstov et al., 2022). Our results will
hence apply to both settings. For brevity, we will mostly
refer to Kgauss .
However, as noted in (Choromanski et al., 2020), RFFs lead
to unstable training of implicit linear-attention Transformers.
1arXiv:2301.13856v2 [stat.ML] 7 Oct 2023 |
2024.02.06.579080v1.full.pdf | Direct Coupling Analysis and the Attention Mechanism 1
Francesco Caredda1†and Andrea Pagnani1,2,3†
2
1DISAT, Politecnico di Torino, Corso Duca degli Abruzzi, 24, I-10129, Torino, Italy 3
2Italian Institute for Genomic Medicine, IRCCS Candiolo, SP-142, I-10060, 4
Candiolo, Italy 5
3INFN, Sezione di Torino, Torino, Via Pietro Giuria, 1 10125 Torino Italy 6
Abstract
Proteins serve as the foundation for nearly all biological functions within cells, encompassing roles in transport, signaling, enzymatic activity, and more. Their functionalities hinge significantly on their intricate three-dimensional structures, often posing challenges in terms of difficulty, time, and expense for accurate determination. The introduction of AlphaFold 2 marked a groundbreaking solution to the enduring challenge of predicting a protein's tertiary structure from its amino acid sequence. However, the inherent complexity of AlphaFold's architecture presents obstacles in deciphering its learning process and understanding the decision-making that ultimately shapes the protein's final structure.
In this study, we introduce a shallow, unsupervised model designed to understand the self-attention layer within the Evoformer block of AlphaFold. We establish a method based on Direct Coupling Analysis (DCA), wherein the interaction tensor undergoes decomposition, leveraging the same structure employed in Transformer architectures. The model's parameters, notably fewer than those in standard DCA, are interpretable through an examination of the resulting attention matrices. These matrices enable the extraction of contact information, subsequently utilized for constructing the contact map of a protein family. Additionally, the self-attention decomposition in the DCA Hamiltonian form adopted here facilitates the definition of multi-family learning architecture, enabling the inference of parameter sets shared across diverse protein families. Finally, an autoregressive generative version of the model is implemented, capable of efficiently generating new proteins in silico. This generative model reproduces the summary statistics of the original protein family while concurrently inferring direct contacts in the tertiary structure of the protein. The effectiveness of our Attention-Based DCA architecture is evaluated using Multiple Sequence Alignments (MSAs) of varying lengths and depths, with structural data sourced from the Pfam database.
1 Introduction
Proteins constitute a diverse category of biological compounds constructed from a set of 20 amino acids. Within an organism, they serve various functions, including structural support, mobility,
1 |
2307.08691.pdf | FlashAttention-2 :
Faster Attention with Better Parallelism and Work Partitioning
Tri Dao1,2
1Department of Computer Science, Princeton University
2Department of Computer Science, Stanford University
[email protected]
July 18, 2023
Abstract
Scaling Transformers to longer sequence lengths has been a major problem in the last several years,
promising to improve performance in language modeling and high-resolution image understanding, as
well as to unlock new applications in code, audio, and video generation. The attention layer is the
main bottleneck in scaling to longer sequences, as its runtime and memory increase quadratically in
the sequence length. FlashAttention [5] exploits the asymmetric GPU memory hierarchy to bring
significant memory saving (linear instead of quadratic) and runtime speedup (2-4 ×compared to optimized
baselines), with no approximation. However, FlashAttention is still not nearly as fast as optimized
matrix-multiply (GEMM) operations, reaching only 25-40% of the theoretical maximum FLOPs/s. We
observe that the inefficiency is due to suboptimal work partitioning between different thread blocks and
warps on the GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We propose
FlashAttention-2 , with better work partitioning to address these issues. In particular, we (1) tweak
the algorithm to reduce the number of non-matmul FLOPs (2) parallelize the attention computation, even
for a single head, across different thread blocks to increase occupancy, and (3) within each thread block,
distribute the work between warps to reduce communication through shared memory. These yield around
2×speedup compared to FlashAttention , reaching 50-73% of the theoretical maximum FLOPs/s on
A100 and getting close to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed of up to 225 TFLOPs/s
per A100 GPU (72% model FLOPs utilization).1
1 Introduction
Scaling up the context length of Transformers [ 18] is a challenge, since the attention layer at their heart
has runtime and memory requirements quadratic in the input sequence length. Ideally, we would like to go
beyond the standard 2k sequence length limit to train models to understand books, high resolution images,
and long-form videos. Just within the last year, there have been several language models with much longer
context than before: GPT-4 [ 12] with context length 32k, MosaicML’s MPT with context length 65k, and
Anthropic’s Claude with context length 100k. Emerging use cases such as long document querying and story
writing have demonstrated a need for models with such long context.
To reduce the computational requirement of attention on such long context, there have been numerous
methods proposed to approximate attention [ 2,3,4,8,9,14,19,20]. Though these methods have seen
some use cases, as far as we know, most large-scale training runs still use standard attention. Motivated by
this, Dao et al. [5] proposed to reorder the attention computation and leverage classical techniques (tiling,
recomputation) to significantly speed it up and reduce memory usage from quadratic to linear in sequence
length. This yields 2-4 ×wall-clock time speedup over optimized baselines, up to 10-20 ×memory saving,
1FlashAttention-2 is available at https://github.com/Dao-AILab/flash-attention
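The kernel-level details are beyond this excerpt, but the tiling-plus-streaming-softmax idea that FlashAttention builds on can be sketched in a few lines of numpy: keys and values are processed block by block with a running max and normalizer, so the full N×N score matrix is never materialized. This is an illustration only, not the CUDA implementation, and the block size and shapes are arbitrary.

```python
import numpy as np

def attention_tiled(Q, K, V, block=64):
    # Stream over key/value blocks, keeping a running row-wise max (m), running
    # softmax normalizer (l) and running weighted sum (out) per query.
    N, d = Q.shape
    out = np.zeros((N, V.shape[1]))
    m = np.full(N, -np.inf)
    l = np.zeros(N)
    for s in range(0, N, block):
        S = Q @ K[s:s + block].T / np.sqrt(d)       # scores for this block only
        m_new = np.maximum(m, S.max(axis=1))
        p = np.exp(S - m_new[:, None])
        scale = np.exp(m - m_new)                   # rescale previous partial results
        l = l * scale + p.sum(axis=1)
        out = out * scale[:, None] + p @ V[s:s + block]
        m = m_new
    return out / l[:, None]

# check against the naive quadratic implementation
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(256, 32)) for _ in range(3))
S = Q @ K.T / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(attention_tiled(Q, K, V), ref)
```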
1arXiv:2307.08691v1 [cs.LG] 17 Jul 2023 |
supplementary-gpsa.pdf | Supplementary Information for:
Generative Capacity of Probabilistic Protein Sequence Models
Francisco McGee Sandro Hauri Quentin Novinger Slobodan Vucetic Ronald M. Levy
Vincenzo Carnevale Allan Haldane
Supplementary Note 1 - sVAE implementation
The standard variational autoencoder (sVAE) is a deep, symmetrical, and undercomplete autoencoder neural
network composed of a separate encoder qφ(Z|S) and decoder pθ(S|Z)1, which map input sequences S into regions of a low-dimensional latent space Z and back (see Fig. S1). It is a probabilistic model, and in our “vanilla”2 implementation we assume sequences will be distributed according to a unit normal distribution in latent space, p(Z) = N[0,1](Z)3. Training of a VAE can be understood as maximization of (the logarithm of) the dataset likelihood L = ∏_S pθ(S) = ∑_S ∫ pθ(S|Z) p(Z) dZ with the addition of a Kullback-Leibler regularization term D_KL[qφ(Z|S), pθ(Z|S)], where pθ(Z|S) is the posterior of the decoder, which allows use of the fitted encoder qφ(Z|S) to perform efficient estimation of the likelihood and its gradient by Monte-Carlo sampling, for appropriate encoder models. The sVAE
architecture is built on the same basic VAE architecture of “EVOVAE”4, which itself appears to be built on the VAE
implementation provided by developers for the Keras library5, and this same VAE architecture is used for each
protein presented in this work.
Similarly to EVOVAE, sVAE’s hyperparameters were tuned using grid search. sVAE is composed of 3symmetrical
ELU-activated layers in both the encoder and decoder, each layer with 250dense (fully-connected) nodes. The
encoder and decoder are connected by a latent layer of l nodes, and we use l = 7 in the main text. We provide further justification for the selection of l = 7 elsewhere in the Supplementary Note 3. sVAE's input layer accepts
one-hot encoded sequences, the output layer is sigmoid-activated, and its node output values can be interpreted as
a Bernoulli distribution of the same dimensions as a one-hot encoded sequence. The first layer of the encoder and
the middle layer of the decoder have dropout regularization applied with 30% dropout rate, and the middle layer of
the encoder uses batch normalization4,6,7.
In all inferences, we hold out 10% of the training sequences as a validation dataset, and perform maximum
likelihood optimization using the Keras Adam stochastic gradient optimizer on the remaining 90%8, using mini-batch
gradient descent with a batch size of 200. After each training epoch we evaluate the loss function for the training
and validation data subsets separately. We have tested using early-stopping regularization to stop inference once
the validation loss has not decreased for three epochs in a row, as in previous implementations, but this led to some
variability in the model depending on when the early stopping criterion was reached. To avoid this variability, and to
make different models more directly comparable, we instead fixed the number of epochs to 32 for all models, since
in the early stopping tests this led to near minimum training loss and validation loss, and did not lead to significant
overfitting as would be apparent from an increase in the validation loss.
sVAE was implemented using Keras, building on previous implementations4,5, however with a modification of
the loss function relative to both of these, to remove a scaling factor of Lq on the reconstruction loss, which is
sometimes used to avoid issues with local minima as described further below. This prefactor leads to a non-unit
variance of the latent space distribution of the dataset sequences, violating our definition that the latent space
distribution should be normal with unit variance, p(Z) =N[0,1](Z). In the next section we show that after removing
the prefactor the latent space distribution is approximately a unit normal, which more closely follows the original
VAE conception3,9. Our implementation is available at https://github.com/ahaldane/MSA_VAE .
To generate a sequence from the model we generate a random sample in latent space from the latent distribution
N[0,1], and pass this value to the decoder to obtain a Bernoulli distribution, from which we sample once.
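A minimal sketch of this generation step follows; the decoder here is a placeholder for the trained network, and the 10×21 one-hot shape (sequence length by alphabet size) is only illustrative.

```python
import numpy as np

def generate_sequence(decoder, latent_dim=7, seed=0):
    # Sample z from the unit-normal latent prior, decode to Bernoulli
    # probabilities shaped like a one-hot sequence, and sample once.
    rng = np.random.default_rng(seed)
    z = rng.normal(size=latent_dim)            # z ~ N[0,1]
    probs = decoder(z)                         # values in [0, 1]
    return (rng.random(probs.shape) < probs).astype(int)

toy_decoder = lambda z: np.full((10, 21), 1.0 / 21)   # stand-in for the trained decoder
onehot_sample = generate_sequence(toy_decoder)
```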
1/20 |
2010.02502.pdf | Published as a conference paper at ICLR 2021
DENOISING DIFFUSION IMPLICIT MODELS
Jiaming Song, Chenlin Meng & Stefano Ermon
Stanford University
{tsong,chenlin,ermon }@cs.stanford.edu
ABSTRACT
Denoising diffusion probabilistic models (DDPMs) have achieved high qual-
ity image generation without adversarial training, yet they require simulating a
Markov chain for many steps in order to produce a sample. To accelerate sam-
pling, we present denoising diffusion implicit models (DDIMs), a more efficient
class of iterative implicit probabilistic models with the same training procedure as
DDPMs. In DDPMs, the generative process is defined as the reverse of a particular
Markovian diffusion process. We generalize DDPMs via a class of non-Markovian
diffusion processes that lead to the same training objective. These non-Markovian
processes can correspond to generative processes that are deterministic, giving rise
to implicit models that produce high quality samples much faster. We empirically
demonstrate that DDIMs can produce high quality samples 10× to 50× faster in
terms of wall-clock time compared to DDPMs, allow us to trade off computation
for sample quality, perform semantically meaningful image interpolation directly
in the latent space, and reconstruct observations with very low error.
1 I NTRODUCTION
Deep generative models have demonstrated the ability to produce high quality samples in many
domains (Karras et al., 2020; van den Oord et al., 2016a). In terms of image generation, genera-
tive adversarial networks (GANs, Goodfellow et al. (2014)) currently exhibit higher sample quality
than likelihood-based methods such as variational autoencoders (Kingma & Welling, 2013), autore-
gressive models (van den Oord et al., 2016b) and normalizing flows (Rezende & Mohamed, 2015;
Dinh et al., 2016). However, GANs require very specific choices in optimization and architectures
in order to stabilize training (Arjovsky et al., 2017; Gulrajani et al., 2017; Karras et al., 2018; Brock
et al., 2018), and could fail to cover modes of the data distribution (Zhao et al., 2018).
Recent works on iterative generative models (Bengio et al., 2014), such as denoising diffusion prob-
abilistic models (DDPM, Ho et al. (2020)) and noise conditional score networks (NCSN, Song &
Ermon (2019)) have demonstrated the ability to produce samples comparable to that of GANs, with-
out having to perform adversarial training. To achieve this, many denoising autoencoding models
are trained to denoise samples corrupted by various levels of Gaussian noise. Samples are then
produced by a Markov chain which, starting from white noise, progressively denoises it into an im-
age. This generative Markov Chain process is either based on Langevin dynamics (Song & Ermon,
2019) or obtained by reversing a forward diffusion process that progressively turns an image into
noise (Sohl-Dickstein et al., 2015).
A critical drawback of these models is that they require many iterations to produce a high quality
sample. For DDPMs, this is because the generative process (from noise to data) approximates
the reverse of the forward diffusion process (from data to noise), which could have thousands of
steps; iterating over all the steps is required to produce a single sample, which is much slower
compared to GANs, which only needs one pass through a network. For example, it takes around 20
hours to sample 50k images of size 32×32 from a DDPM, but less than a minute to do so from
a GAN on a Nvidia 2080 Ti GPU. This becomes more problematic for larger images as sampling
50k images of size 256×256 could take nearly 1000 hours on the same GPU.
To close this efficiency gap between DDPMs and GANs, we present denoising diffusion implicit
models (DDIMs). DDIMs are implicit probabilistic models (Mohamed & Lakshminarayanan, 2016)
and are closely related to DDPMs, in the sense that they are trained with the same objective function.
1arXiv:2010.02502v4 [cs.LG] 5 Oct 2022 |
1901.09321.pdf | Published as a conference paper at ICLR 2019
FIXUP INITIALIZATION :
RESIDUAL LEARNING WITHOUT NORMALIZATION
Hongyi Zhang∗
MIT
[email protected] N. Dauphin†
Google Brain
[email protected] Ma‡
Stanford University
[email protected]
ABSTRACT
Normalization layers are a staple in state-of-the-art deep neural network archi-
tectures. They are widely believed to stabilize training, enable higher learning
rate, accelerate convergence and improve generalization, though the reason for
their effectiveness is still an active research topic. In this work, we challenge the
commonly-held beliefs by showing that none of the perceived benefits is unique
to normalization. Specifically, we propose fixed-update initialization (Fixup), an
initialization motivated by solving the exploding and vanishing gradient problem
at the beginning of training via properly rescaling a standard initialization. We
find training residual networks with Fixup to be as stable as training with nor-
malization — even for networks with 10,000 layers. Furthermore, with proper
regularization, Fixup enables residual networks without normalization to achieve
state-of-the-art performance in image classification and machine translation.
1 I NTRODUCTION
Artificial intelligence applications have witnessed major advances in recent years. At the core of
this revolution is the development of novel neural network models and their training techniques. For
example, since the landmark work of He et al. (2016), most of the state-of-the-art image recognition
systems are built upon a deep stack of network blocks consisting of convolutional layers and additive
skip connections, with some normalization mechanism (e.g., batch normalization (Ioffe & Szegedy,
2015)) to facilitate training and generalization. Besides image classification, various normalization
techniques (Ulyanov et al., 2016; Ba et al., 2016; Salimans & Kingma, 2016; Wu & He, 2018) have
been found essential to achieving good performance on other tasks, such as machine translation
(Vaswani et al., 2017) and generative modeling (Zhu et al., 2017). They are widely believed to have
multiple benefits for training very deep neural networks, including stabilizing learning, enabling
higher learning rate, accelerating convergence, and improving generalization.
Despite the enormous empirical success of training deep networks with normalization, and recent
progress on understanding the working of batch normalization (Santurkar et al., 2018), there is
currently no general consensus on why these normalization techniques help training residual neural
networks. Intrigued by this topic, in this work we study
(i) without normalization, can a deep residual network be trained reliably? (And if so,)
(ii) without normalization, can a deep residual network be trained with the same learning rate,
converge at the same speed, and generalize equally well (or even better)?
Perhaps surprisingly, we find the answers to both questions are Yes. In particular, we show:
•Why normalization helps training. We derive a lower bound for the gradient norm of a residual
network at initialization, which explains why with standard initializations, normalization tech-
niques are essential for training deep residual networks at maximal learning rate. (Section 2)
∗Work done at Facebook. Equal contribution.
†Work done at Facebook. Equal contribution.
‡Work done at Facebook.
1arXiv:1901.09321v2 [cs.LG] 12 Mar 2019 |
2402.03300.pdf | DeepSeekMath: Pushing the Limits of Mathematical
Reasoning in Open Language Models
Zhihong Shao1,2∗†, Peiyi Wang1,3∗†, Qihao Zhu1,3∗†, Runxin Xu1, Junxiao Song1
Mingchuan Zhang1, Y.K. Li1, Y. Wu1, Daya Guo1∗
1DeepSeek-AI,2Tsinghua University,3Peking University
{zhihongshao,wangpeiyi,zhuqh,guoday}@deepseek.com
https://github.com/deepseek-ai/DeepSeek-Math
Abstract
Mathematical reasoning poses a significant challenge for language models due to its complex
and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-
training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common
Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an
impressive score of 51.7% on the competition-level MATH benchmark without relying on
external toolkits and voting techniques, approaching the performance level of Gemini-Ultra
and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH.
The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First,
we harness the significant potential of publicly available web data through a meticulously
engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization
(GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning
abilities while concurrently optimizing the memory usage of PPO.
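The abstract only names GRPO, so as a hedged illustration of the group-relative idea: rather than learning a separate value network as in PPO, each of a group of sampled solutions is scored against the group's own reward statistics, which is where the memory savings come from. The normalization below is an assumption made for illustration; the clipped policy-gradient objective itself is omitted.

```python
import numpy as np

def group_relative_advantages(rewards):
    # Sketch: sample a group of G completions per prompt, then compare each
    # reward to the group mean/std instead of a learned value baseline.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# toy usage: rewards for G = 4 sampled solutions to the same math problem
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```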
Figure 1|Top1 accuracy of open-source models on the competition-level MATH benchmark
(Hendrycks et al., 2021) without the use of external toolkits and voting techniques.
∗Core contributors.
†Work done during internship at DeepSeek-AI.arXiv:2402.03300v1 [cs.CL] 5 Feb 2024 |
2111.02080.pdf | An Explanation of In-context Learning as Implicit
Bayesian Inference
Sang Michael Xie
Stanford University
[email protected] Raghunathan
Stanford University
[email protected]
Percy Liang
Stanford University
[email protected] Ma
Stanford University
[email protected]
Abstract
Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where
the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output
examples. The LM learns from these examples without being explicitly pretrained to learn . Thus, it is unclear what
enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining
documents have long-range coherence. Here, the LM must infer a latent document-level concept to generate
coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers
a shared latent concept between examples in a prompt. We prove when this occurs despite a distribution
mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of
HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate
a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning1.
Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved
in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order,
and instances where zero-shot is better than few-shot in-context learning.
1 Introduction
Large language models (LMs) such as GPT-3 (Brown et al., 2020, Lieber et al., 2021, Radford et al.,
2019, Wang and Komatsuzaki, 2021) are pretrained on massive text corpora to predict the next word
given previous words. They demonstrate the surprising ability to do in-context learning , where an
LM “learns” to do a task simply by conditioning on a prompt containing input-output pairs, achiev-
ing SOTA results on LAMBADA (Paperno et al., 2016) and TriviaQA (Joshi et al., 2017) tasks (18%
and 3% over previous SOTA (Brown et al., 2020)). For example, consider the task of predicting
nationalities from names. A prompt (Figure 1) is constructed by concatenating independent “train-
ing” examples (e.g., “Albert Einstein was German”) followed by a “test example” (“Marie Curie
was”). Conditioning on this prompt, GPT-3 places the largest probability on the correct output
p(“Polish”|“Albert Einstein was German \n Mahatma Gandhi was Indian \n Marie Curie was” )
1The code, data, and experiments are located on GitHub and CodaLab.
1arXiv:2111.02080v6 [cs.CL] 21 Jul 2022 |
2404.16014v1.pdf | 2024-4-25
Improving Dictionary Learning with Gated
Sparse Autoencoders
Senthooran Rajamanoharan*, Arthur Conmy*, Lewis Smith, Tom Lieberum†, Vikrant Varma†, János Kramár,
Rohin Shah and Neel Nanda
*: Joint contribution.†: Core infrastructure contributor.
Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised
discovery of interpretable features in language models’ (LMs) activations, by finding sparse, linear
reconstructions of LM activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which
achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to
encourage sparsity introduces many undesirable biases, such as shrinkage – systematic underestimation
of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining
which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply
the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training
SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve
shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable
reconstruction fidelity.
1. Introduction
Mechanistic interpretability research aims to explain how neural networks produce outputs in terms
of the learned algorithms executed during a forward pass (Olah, 2022; Olah et al., 2020). Much work
makes use of the fact that many concept representations appear to be linear (Elhage et al., 2021;
Gurnee et al., 2023; Olah et al., 2020; Park et al., 2023). However, finding the set of all interpretable
directions is a highly non-trivial problem. Classic approaches, like interpreting neurons (i.e. directions
in the standard basis) are insufficient, as many are polysemantic and tend to activate for a range
of different seemingly unrelated concepts (Bolukbasi et al., 2021; Elhage et al., 2022a,b, Empirical
Phenomena).
The superposition hypothesis (Elhage et al., 2022b, Definitions and Motivation) posits a mechanistic explanation for these observations: in an intermediate representation of dimension 𝑛, a model will encode 𝑀 ≫ 𝑛 concepts as linear directions, where the set of concepts and their directions is fixed
across all inputs, but on a given input only a sparse subset of concepts are active, ensuring that
there is not much simultaneous interference (Gurnee et al., 2023, Appendix A) between these
(non-orthogonal) concepts. Motivated by the superposition hypothesis, Bricken et al. (2023) and
Cunningham et al. (2023) recently used sparse autoencoders (SAEs; Ng (2011)) to find sparse
decompositions of model activations in terms of an overcomplete basis, or dictionary (Mallat and
Zhang, 1993).1
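For reference, the prevailing SAE recipe that the following discussion of the L1 penalty refers to can be sketched as a ReLU encoder and linear decoder trained on reconstruction error plus an L1 sparsity term. This is a hedged baseline sketch, not the Gated SAE itself; whether the decoder bias is also subtracted at the encoder input varies across implementations.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sae_loss(x, W_enc, b_enc, W_dec, b_dec, l1_coef=1e-3):
    # Baseline sparse autoencoder: sparse feature activations, linear decoder,
    # L2 reconstruction error plus an L1 penalty encouraging sparsity.
    f = relu(W_enc @ (x - b_dec) + b_enc)          # feature activations
    x_hat = W_dec @ f + b_dec                      # reconstruction
    return np.sum((x - x_hat) ** 2) + l1_coef * np.sum(np.abs(f)), f

# toy usage: 16-dim activations, 64 dictionary features
rng = np.random.default_rng(0)
n, m = 16, 64
W_enc = rng.normal(size=(m, n)) / np.sqrt(n)
W_dec = rng.normal(size=(n, m)) / np.sqrt(m)
b_enc, b_dec = np.zeros(m), np.zeros(n)
loss, f = sae_loss(rng.normal(size=n), W_enc, b_enc, W_dec, b_dec)
```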
Although SAEs show promise in this regard, the L1 penalty used in the prevailing training method
to encourage sparsity also introduces biases that harm the accuracy of SAE reconstructions, as the
loss can be decreased by trading-off some reconstruction accuracy for lower L1. We refer to this
1Although motivated by the superposition hypothesis, the utility of this line of research is not contingent on this hypothesis
being true. If a faithful, sparse and interpretable decomposition can be found, we expect this to be a useful basis in its own
right for downstream interpretability tasks, such as understanding or intervening on a model’s representations and circuits,
even if some fraction of the model’s computation is e.g. represented non-linearly and not captured.
Corresponding authors: [email protected] and [email protected]
©2024 Google DeepMind. All rights reserved
arXiv:2404.16014v1 [cs.LG] 24 Apr 2024 |
1811.07871.pdf | Scalable agent alignment via reward modeling:
a research direction
Jan Leike
DeepMindDavid Krueger∗
DeepMind
MilaTom Everitt
DeepMindMiljan Martic
DeepMindVishal Maini
DeepMindShane Legg
DeepMind
Abstract
One obstacle to applying reinforcement learning algorithms to real-world problems
is the lack of suitable reward functions. Designing such reward functions is difficult
in part because the user only has an implicit understanding of the task objective.
This gives rise to the agent alignment problem : how do we create agents that behave
in accordance with the user’s intentions? We outline a high-level research direction
to solve the agent alignment problem centered around reward modeling : learning a
reward function from interaction with the user and optimizing the learned reward
function with reinforcement learning. We discuss the key challenges we expect
to face when scaling reward modeling to complex and general domains, concrete
approaches to mitigate these challenges, and ways to establish trust in the resulting
agents.
1 Introduction
Games are a useful benchmark for research because progress is easily measurable. Atari games
come with a score function that captures how well the agent is playing the game; board games or
competitive multiplayer games such as Dota 2 and Starcraft II have a clear winner or loser at the end
of the game. This helps us determine empirically which algorithmic and architectural improvements
work best.
However, the ultimate goal of machine learning (ML) research is to go beyond games and improve
human lives. To achieve this we need ML to assist us in real-world domains, ranging from simple
tasks like ordering food or answering emails to complex tasks like software engineering or running a
business. Yet performance on these and other real-world tasks is not easily measurable, since they do
not come readily equipped with a reward function. Instead, the objective of the task is only indirectly
available through the intentions of the human user.
This requires walking a fine line. On the one hand, we want ML to generate creative and brilliant
solutions like AlphaGo’s Move 37 (Metz, 2016)—a move that no human would have recommended,
yet it completely turned the game in AlphaGo’s favor. On the other hand, we want to avoid degenerate
solutions that lead to undesired behavior like exploiting a bug in the environment simulator (Clark &
Amodei, 2016; Lehman et al., 2018). In order to differentiate between these two outcomes, our agent
needs to understand its user’s intentions , and robustly achieve these intentions with its behavior. We
frame this as the agent alignment problem :
How can we create agents that behave in accordance with the user’s intentions?
With this paper we outline a research direction to solve the agent alignment problem. We build
on taxonomies and problem definitions from many authors before us, highlighting tractable and
neglected problems in the field of AI safety (Russell et al., 2015; Soares, 2015; Amodei et al., 2016;
∗Work done during an internship at DeepMind.arXiv:1811.07871v1 [cs.LG] 19 Nov 2018 |
1803.03635.pdf | Published as a conference paper at ICLR 2019
THELOTTERY TICKET HYPOTHESIS :
FINDING SPARSE , TRAINABLE NEURAL NETWORKS
Jonathan Frankle
MIT CSAIL
[email protected] Carbin
MIT CSAIL
[email protected]
ABSTRACT
Neural network pruning techniques can reduce the parameter counts of trained net-
works by over 90%, decreasing storage requirements and improving computational
performance of inference without compromising accuracy. However, contemporary
experience is that the sparse architectures produced by pruning are difficult to train
from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose
initializations made them capable of training effectively. Based on these results, we
articulate the lottery ticket hypothesis : dense, randomly-initialized, feed-forward
networks contain subnetworks ( winning tickets ) that—when trained in isolation—
reach test accuracy comparable to the original network in a similar number of
iterations. The winning tickets we find have won the initialization lottery: their
connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments
that support the lottery ticket hypothesis and the importance of these fortuitous
initializations. We consistently find winning tickets that are less than 10-20% of
the size of several fully-connected and convolutional feed-forward architectures
for MNIST and CIFAR10. Above this size, the winning tickets that we find learn
faster than the original network and reach higher test accuracy.
1 I NTRODUCTION
Techniques for eliminating unnecessary weights from neural networks ( pruning ) (LeCun et al., 1990;
Hassibi & Stork, 1993; Han et al., 2015; Li et al., 2016) can reduce parameter-counts by more than
90% without harming accuracy. Doing so decreases the size (Han et al., 2015; Hinton et al., 2015)
or energy consumption (Yang et al., 2017; Molchanov et al., 2016; Luo et al., 2017) of the trained
networks, making inference more efficient. However, if a network can be reduced in size, why do we
not train this smaller architecture instead in the interest of making training more efficient as well?
Contemporary experience is that the architectures uncovered by pruning are harder to train from the
start, reaching lower accuracy than the original networks.1
Consider an example. In Figure 1, we randomly sample and train subnetworks from a fully-connected
network for MNIST and convolutional networks for CIFAR10. Random sampling models the effect
of the unstructured pruning used by LeCun et al. (1990) and Han et al. (2015). Across various levels
of sparsity, dashed lines trace the iteration of minimum validation loss2and the test accuracy at that
iteration. The sparser the network, the slower the learning and the lower the eventual test accuracy.
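For contrast with random sampling, one round of the magnitude-prune-and-rewind procedure used to identify winning tickets can be sketched as below; layer-wise pruning rates, iterative pruning, and the training loops themselves are omitted, and the "trained" weights are only a stand-in.

```python
import numpy as np

def winning_ticket_mask(w_trained, prune_frac=0.2):
    # Keep the largest-magnitude weights after training; prune the rest.
    thresh = np.quantile(np.abs(w_trained), prune_frac)
    return (np.abs(w_trained) > thresh).astype(w_trained.dtype)

def reset_to_ticket(w_init, mask):
    # The lottery-ticket step: rewind surviving weights to their original
    # initialization, zero out pruned ones, then retrain this subnetwork.
    return w_init * mask

# toy usage
rng = np.random.default_rng(0)
w_init = rng.normal(size=(300, 100))
w_trained = w_init + 0.1 * rng.normal(size=w_init.shape)   # stand-in for training
mask = winning_ticket_mask(w_trained, prune_frac=0.8)
ticket = reset_to_ticket(w_init, mask)                      # retrain from here
```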
1“Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate
the difficulty of training a network with a small capacity.” (Li et al., 2016) “During retraining, it is better to retain
the weights from the initial training phase for the connections that survived pruning than it is to re-initialize the
pruned layers...gradient descent is able to find a good solution when the network is initially trained, but not after
re-initializing some layers and retraining them.” (Han et al., 2015)
2As a proxy for the speed at which a network learns, we use the iteration at which an early-stopping criterion
would end training. The particular early-stopping criterion we employ throughout this paper is the iteration of
minimum validation loss during training. See Appendix C for more details on this choice.
1arXiv:1803.03635v5 [cs.LG] 4 Mar 2019 |
2002.10957v2.pdf | MINILM: Deep Self-Attention Distillation for
Task-Agnostic Compression of Pre-Trained Transformers
Wenhui Wang Furu Wei Li Dong Hangbo Bao Nan Yang Ming Zhou
Microsoft Research
{wenwan,fuwei,lidong1,t-habao,nanya,mingzhou}@microsoft.com
Abstract
Pre-trained language models (e.g., BERT (Devlin
et al., 2018) and its variants) have achieved re-
markable success in varieties of NLP tasks. How-
ever, these models usually consist of hundreds of
millions of parameters which brings challenges
for fine-tuning and online serving in real-life ap-
plications due to latency and capacity constraints.
In this work, we present a simple and effective ap-
proach to compress large Transformer (Vaswani
et al., 2017) based pre-trained models, termed
as deep self-attention distillation. The small
model (student) is trained by deeply mimicking
the self-attention module, which plays a vital
role in Transformer networks, of the large model
(teacher). Specifically, we propose distilling the
self-attention module of the last Transformer layer
of the teacher, which is effective and flexible for
the student. Furthermore, we introduce the scaled
dot-product between values in the self-attention
module as the new deep self-attention knowledge,
in addition to the attention distributions (i.e., the
scaled dot-product of queries and keys) that have
been used in existing works. Moreover, we show
that introducing a teacher assistant (Mirzadeh
et al., 2019) also helps the distillation of large
pre-trained Transformer models. Experimental
results demonstrate that our monolingual model1
outperforms state-of-the-art baselines in different
parameter size of student models. In particular, it
retains more than 99% accuracy on SQuAD 2.0
and several GLUE benchmark tasks using 50%
of the Transformer parameters and computations
of the teacher model. We also obtain competitive
results in applying deep self-attention distillation
to multilingual pre-trained models.
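A hedged sketch of the two kinds of last-layer self-attention knowledge described above, for a single head: the attention distributions (scaled query–key dot-products) and the value relation (scaled value–value dot-products) of the student are pushed toward the teacher's via KL divergence. Shapes, multi-head handling, and masking are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation(M, d_k):
    # Scaled dot-product self-relation of a per-head (n x d_k) matrix.
    return softmax(M @ M.T / np.sqrt(d_k))

def kl(p, q, eps=1e-9):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

def distillation_loss(teacher, student, d_t, d_s):
    # teacher/student: dicts with Q, K, V of the last Transformer layer (one head).
    attn_t = softmax(teacher["Q"] @ teacher["K"].T / np.sqrt(d_t))
    attn_s = softmax(student["Q"] @ student["K"].T / np.sqrt(d_s))
    vr_t, vr_s = relation(teacher["V"], d_t), relation(student["V"], d_s)
    return kl(attn_t, attn_s) + kl(vr_t, vr_s)

# toy usage with random projections standing in for real Q/K/V
rng = np.random.default_rng(0)
teacher = {k: rng.normal(size=(6, 64)) for k in "QKV"}
student = {k: rng.normal(size=(6, 32)) for k in "QKV"}
print(distillation_loss(teacher, student, d_t=64, d_s=32))
```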
Correspondence to: Furu Wei <[email protected]>.
1The code and models are publicly available at https://aka.ms/minilm.
1. Introduction
Language model (LM) pre-training has achieved remarkable
success for various natural language processing tasks (Pe-
ters et al., 2018; Howard & Ruder, 2018; Radford et al.,
2018; Devlin et al., 2018; Dong et al., 2019; Yang et al.,
2019; Joshi et al., 2019; Liu et al., 2019). The pre-trained
language models, such as BERT (Devlin et al., 2018) and its
variants, learn contextualized text representations by predict-
ing words given their context using large scale text corpora,
and can be fine-tuned with additional task-specific layers to
adapt to downstream tasks. However, these models usually
contain hundreds of millions of parameters which brings
challenges for fine-tuning and online serving in real-life
applications for latency and capacity constraints.
Knowledge distillation (Hinton et al., 2015; Romero et al.,
2015) (KD) has been proven to be a promising way to com-
press a large model (called the teacher model) into a small
model (called the student model), which uses much fewer
parameters and computations while achieving competitive
results on downstream tasks. There have been some works
that task-specifically distill pre-trained large LMs into small
models (Tang et al., 2019; Turc et al., 2019b; Sun et al.,
2019a; Aguilar et al., 2019). They first fine-tune the pre-
trained LMs on specific tasks and then perform distillation.
Task-specific distillation is effective, but fine-tuning large
pre-trained models is still costly, especially for large datasets.
Different from task-specific distillation, task-agnostic LM
distillation mimics the behavior of the original pre-trained
LMs and the student model can be directly fine-tuned on
downstream tasks (Tsai et al., 2019; Sanh et al., 2019; Jiao
et al., 2019; Sun et al., 2019b).
Previous works use soft target probabilities for masked lan-
guage modeling predictions or intermediate representations
of the teacher LM to guide the training of the task-agnostic
student. DistilBERT (Sanh et al., 2019) employs a soft-label
distillation loss and a cosine embedding loss, and initializes
the student from the teacher by taking one layer out of two.
But each Transformer layer of the student is required to
have the same architecture as its teacher. TinyBERT (Jiao
et al., 2019) and MobileBERT (Sun et al., 2019b) utilize
arXiv:2002.10957v2 [cs.CL] 6 Apr 2020 |
reka-vibe-eval.pdf | Vibe-Eval: A hard evaluation suite for measuring progress of
multimodal language models
Piotr Padlewski∗Max Bain∗Matthew Henderson Zhongkai Zhu
Nishant Relan Hai Pham Donovan Ong Kaloyan Aleksiev Aitor Ormazabal
Samuel Phua Ethan Yeo Eugenie Lamprecht Qi Liu Yuqi Wang Eric Chen Deyu Fu Lei Li
Che Zheng Cyprien de Masson d’Autume Dani Yogatama Mikel Artetxe Yi Tay
Reka
What does this image say?
There is a message hidden in the image that says " GOOD VIBES ONLY ". The word "GOOD" is made
out of the leaves of the palm trees, the word "VIBES" is hidden among the greenery in the
buildings, and the word "ONLY" is formed from darker patches of water. Golden Reference
The image features the word "ONLY" written in large letters on the sandy beach. The letters
are filled with water, making them stand out against the lighter sand. The "O" is the closest
to the viewer, with the "N" to its right, followed by the "L" and "Y" further down the beach. Reka Core
The assistant response only identified the word "only" in the sand, while the ground truth
response revealed the complete hidden message "GOOD VIBES ONLY" using various elements in the
image, such as palm trees, greenery in the buildings, and darker patches of water. Therefore,
the assistant response did not correctly match the ground truth.
Rating: 2 / 5
Evaluator
Figure 1: An example from our Vibe-Eval benchmark. This prompt is from the hard-set, containing only
difficult prompts. The user prompt and image is shown on the left, and on the right is the human-written
golden reference response, a generation from Reka Core, and output from the automatic evaluator.
Abstract
We introduce Vibe-Eval: a new open benchmark and framework for evaluating multimodal chat models.
Vibe-Eval consists of 269 visual understanding prompts, including 100 of hard difficulty, and complete with gold-standard responses authored by experts. Vibe-Eval is open-ended and challenging with dual objectives: (i) vibe checking multimodal chat models for day-to-day tasks and (ii) deeply challenging and probing the capabilities of present frontier models. To this end, our hard set contains >50% questions that all frontier models answer incorrectly. We explore the nuances of designing, evaluating, and ranking models on ultra challenging prompts. We also discuss trade-offs between human and automatic evaluation, and show that automatic model evaluation using Reka Core roughly correlates to human judgment. We have made API access free-of-charge for the purpose of lightweight evaluation with the intention to run formal human evaluations for public models that do well on the Vibe-Eval automatic scores. We release the evaluation code and data at github.com/reka-ai/reka-vibe-eval.
∗Equal contribution |
10.1016.j.cell.2023.12.012.pdf | Article
Human fetal brain self-organizes into long-term
expanding organoids
Graphical abstract
Highlights
• Human fetal brain organoids (FeBOs) display cellular heterogeneity and can be expanded
• FeBOs produce a tissue-like ECM niche and enable ECM perturbation studies
• Derivation of regional FeBOs allows the study of regional morphogen effects
• CRISPR-engineered FeBOs are a scalable bottom-up tumor modeling platform
Authors
Delilah Hendriks, Anna Pagliaro, Francesco Andreatta, ..., Wei Wu, Hans Clevers, Benedetta Artegiani
Correspondence
[email protected] (D.H.), [email protected] (H.C.), b.a.artegiani@prinsesmaximacentrum.nl (B.A.)
In brief
Tissue-derived human fetal brain organoids (FeBOs) are established from different anatomical regions that can be expanded in culture. FeBOs may help understand developmental and disease-related biology.
Hendriks et al., 2024, Cell 187, 712–732
February 1, 2024 © 2023 The Author(s). Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.012
|
2009.01325v3.pdf | Learning to summarize from human feedback
Nisan Stiennon∗Long Ouyang∗Jeff Wu∗Daniel M. Ziegler∗Ryan Lowe∗
Chelsea Voss∗Alec Radford Dario Amodei Paul Christiano∗
OpenAI
Abstract
As language models become more powerful, training and evaluation are increas-
ingly bottlenecked by the data and metrics used for a particular task. For example,
summarization models are often trained to predict human reference summaries and
evaluated using ROUGE, but both of these metrics are rough proxies for what we
really care about—summary quality. In this work, we show that it is possible to
significantly improve summary quality by training a model to optimize for human
preferences. We collect a large, high-quality dataset of human comparisons be-
tween summaries, train a model to predict the human-preferred summary, and use
that model as a reward function to fine-tune a summarization policy using reinforce-
ment learning. We apply our method to a version of the TL;DR dataset of Reddit
posts [ 63] and find that our models significantly outperform both human reference
summaries and much larger models fine-tuned with supervised learning alone. Our
models also transfer to CNN/DM news articles [ 22], producing summaries nearly
as good as the human reference without any news-specific fine-tuning.2We con-
duct extensive analyses to understand our human feedback dataset and fine-tuned
models.3We establish that our reward model generalizes to new datasets, and that
optimizing our reward model results in better summaries than optimizing ROUGE
according to humans. We hope the evidence from our paper motivates machine
learning researchers to pay closer attention to how their training loss affects the
model behavior they actually want.
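As a rough sketch of the reward-modeling step described in the abstract, the snippet below implements a pairwise preference loss in PyTorch; the `reward_model` interface (token ids in, one scalar reward per sequence out) is a hypothetical stand-in, not the paper's architecture.

```python
import torch.nn.functional as F

def preference_loss(reward_model, preferred_ids, rejected_ids):
    """Bradley-Terry style pairwise loss: push the scalar reward of the
    human-preferred summary above the reward of the rejected one."""
    r_pref = reward_model(preferred_ids)  # shape: (batch,)
    r_rej = reward_model(rejected_ids)    # shape: (batch,)
    # -log sigmoid(r_pref - r_rej), averaged over the batch
    return -F.logsigmoid(r_pref - r_rej).mean()
```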
1 Introduction
Large-scale language model pretraining has become increasingly prevalent for achieving high per-
formance on a variety of natural language processing (NLP) tasks. When applying these models
to a specific task, they are usually fine-tuned using supervised learning, often to maximize the log
probability of a set of human demonstrations.
While this strategy has led to markedly improved performance, there is still a misalignment between
this fine-tuning objective—maximizing the likelihood of human-written text—and what we care
about—generating high-quality outputs as determined by humans. This misalignment has several
causes: the maximum likelihood objective has no distinction between important errors (e.g. making
up facts [ 41]) and unimportant errors (e.g. selecting the precise word from a set of synonyms); models
∗This was a joint project of the OpenAI Reflection team. Author order was randomized amongst {LO, JW,
DZ, NS}; CV and RL were full-time contributors for most of the duration. PC is the team lead.
2Samples from all of our models can be viewed on our website.
3We provide inference code for our 1.3B models and baselines, as well as a model card and our human
feedback dataset with over 64k summary comparisons, here.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.arXiv:2009.01325v3 [cs.CL] 15 Feb 2022 |
2306.04751.pdf | How Far Can Camels Go? Exploring the State of
Instruction Tuning on Open Resources
Yizhong Wang∗♣♠Hamish Ivison∗♣Pradeep Dasigi♣Jack Hessel♣
Tushar Khot♣Khyathi Raghavi Chandu♣David Wadden♣Kelsey MacMillan♣
Noah A. Smith♣♠Iz Beltagy♣Hannaneh Hajishirzi♣♠
♣Allen Institute for AI♠University of Washington
{yizhongw,hamishi}@allenai.org
Abstract
In this work we explore recent advances in instruction-tuning language models on a range of open
instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art
proprietary models, these claims are often accompanied by limited evaluation, making it difficult to compare
models across the board and determine the utility of various resources. We provide a large set of
instruction-tuned models from 6.7B to 65B parameters in size, trained on 12 instruction datasets ranging
from manually curated (e.g., OpenAssistant) to synthetic and distilled (e.g., Alpaca) and systematically
evaluate them on their factual knowledge, reasoning, multilinguality, coding, safety, and open-ended
instruction following abilities through a collection of automatic, model-based, and human-based metrics.
We further introduce TÜLU, our best performing instruction-tuned model suite finetuned on a combination
of high-quality open resources.
Our experiments show that different instruction-tuning datasets can uncover or enhance specific skills,
while no single dataset (or combination) provides the best performance across all evaluations. Interestingly,
we find that model and human preference-based evaluations fail to reflect differences in model capabilities
exposed by benchmark-based evaluations, suggesting the need for the type of systemic evaluation performed
in this work. Our evaluations show that the best model in any given evaluation reaches on average 87% of
ChatGPT performance, and 73% of GPT-4 performance, suggesting that further investment in building
better base models and instruction-tuning data is required to close the gap. We release our instruction-tuned
models, including a fully finetuned 65B TÜLU, along with our code, data, and evaluation framework to
facilitate future research.2
1 Introduction
The latest generation of large language models has brought unprecedented attention to the potential of
language technologies. To support imperative user requests and a chat interface, these models often
undergo an instruction-tuning step which involves training on supervised input/output pairs. Recent
instruction tuning corpora are often gathered via crowdsourcing (Dolly [12], OpenAssistant [26])
or via distillation from another model (Alpaca [43], Vicuna [8]). However, while some public,
instruction-tuned models are advertised as comparable to powerful closed-source proprietary models
such as ChatGPT, most experiments that support such claims only cover a small set of tasks, and
mostly rely on model-based evaluation metrics [8, 56]. We contend that the evaluation setup should
∗Equal contribution.
2https://github.com/allenai/open-instruct
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks.arXiv:2306.04751v2 [cs.CL] 30 Oct 2023 |
2403.19887.pdf | Jamba:
A Hybrid Transformer-Mamba Language Model
Opher Lieber∗Barak Lenz∗Hofit Bata Gal Cohen Jhonathan Osin
Itay Dalmedigos Erez Safahi Shaked Meirom Yonatan Belinkov
Shai Shalev-Shwartz Omri Abend Raz Alon Tomer Asida
Amir Bergman Roman Glozman Michael Gokhman Avashalom Manevich
Nir Ratner Noam Rozen Erez Shwartz Mor Zusman Yoav Shoham
Abstract
We present Jamba, a new base large language model based on a novel hybrid
Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba
interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both
model families. MoE is added in some of these layers to increase model capacity
while keeping active parameter usage manageable. This flexible architecture allows
resource- and objective-specific configurations. In the particular configuration we
have implemented, we end up with a powerful model that fits in a single 80GB
GPU. Built at large scale, Jamba provides high throughput and small memory
footprint compared to vanilla Transformers, and at the same time state-of-the-art
performance on standard language model benchmarks and long-context evaluations.
Remarkably, the model presents strong results for up to 256K tokens context length.
We study various architectural decisions, such as how to combine Transformer and
Mamba layers, and how to mix experts, and show that some of them are crucial
in large scale modeling. We also describe several interesting properties of these
architectures which the training and evaluation of Jamba have revealed, and plan to
release checkpoints from various ablation runs, to encourage further exploration
of this novel architecture. We make the weights of our implementation of Jamba
publicly available under a permissive license.
Model: https://huggingface.co/ai21labs/Jamba-v0.1
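To make the interleaving idea concrete, here is a minimal sketch of how a hybrid stack could alternate block types, with MoE swapped into some MLP positions; the placeholder modules and the ratios below are illustrative assumptions, not Jamba's published configuration.

```python
import torch
import torch.nn as nn

class PlaceholderBlock(nn.Module):
    """Stand-in for a real Transformer, Mamba, or MoE block (illustration only)."""
    def __init__(self, kind: str, d_model: int):
        super().__init__()
        self.kind = kind
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.proj(x)  # residual update, as in the real blocks

def build_hybrid_stack(n_layers: int, d_model: int,
                       attn_every: int = 8, moe_every: int = 2) -> nn.Sequential:
    """Mostly Mamba-style layers, an attention layer every `attn_every` positions,
    and an MoE MLP replacing the dense MLP every `moe_every` positions."""
    layers = []
    for i in range(n_layers):
        layers.append(PlaceholderBlock("attention" if i % attn_every == 0 else "mamba", d_model))
        layers.append(PlaceholderBlock("moe_mlp" if i % moe_every == 1 else "dense_mlp", d_model))
    return nn.Sequential(*layers)
```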
1 Introduction
We introduce Jamba, a new publicly available large language model. Jamba is based on a novel
hybrid architecture, which combines Transformer layers [ 46] with Mamba layers [ 16], a recent
state-space model [ 17,18], as well as a mixture-of-experts (MoE) component [ 13,41]. Jamba thus
combines two orthogonal architectural designs that together give it improved performance and higher
throughput, while maintaining a manageable memory footprint. The 7B-based Jamba model (12B
active parameters, 52B total available parameters) we are releasing was designed to fit in a single
80GB GPU, but the Jamba architecture supports other design choices, depending on one’s hardware
and performance requirements.
∗Equal contribution.arXiv:2403.19887v1 [cs.CL] 28 Mar 2024 |
20-302.pdf | Journal of Machine Learning Research 22 (2021) 1-35 Submitted 3/20; Revised 10/20; Published 3/21
Attention is Turing Complete
Jorge Pérez [email protected]
Department of Computer Science
Universidad de Chile
IMFD Chile
Pablo Barceló [email protected]
Institute for Mathematical and Computational Engineering
School of Engineering, Faculty of Mathematics
Universidad Católica de Chile
IMFD Chile
Javier Marinkovic [email protected]
Department of Computer Science
Universidad de Chile
IMFD Chile
Editor: Luc de Raedt
Abstract
Alternatives to recurrent neural networks, in particular, architectures based on self-attention ,
are gaining momentum for processing input sequences. In spite of their relevance, the com-
putational properties of such networks have not yet been fully explored. We study the
computational power of the Transformer , one of the most paradigmatic architectures ex-
emplifying self-attention. We show that the Transformer with hard-attention is Turing
complete exclusively based on their capacity to compute and access internal dense repre-
sentations of the data. Our study also reveals some minimal sets of elements needed to
obtain this completeness result.
Keywords: Transformers, Turing completeness, self-Attention, neural networks, arbi-
trary precision
1. Introduction
There is an increasing interest in designing neural network architectures capable of learning
algorithms from examples (Graves et al., 2014; Grefenstette et al., 2015; Joulin and Mikolov,
2015; Kaiser and Sutskever, 2016; Kurach et al., 2016; Dehghani et al., 2018). A key
requirement for any such an architecture is thus to have the capacity of implementing
arbitrary algorithms, that is, to be Turing complete . Most of the networks proposed for
learning algorithms are Turing complete simply by definition, as they can be seen as a
control unit with access to an unbounded memory; as such, they are capable of simulating
any Turing machine.
On the other hand, the work by Siegelmann and Sontag (1995) has established a dif-
ferent way of looking at the Turing completeness of neural networks. In particular, their
work establishes that recurrent neural networks (RNNs) are Turing complete even if only a
bounded number of resources (i.e., neurons and weights) is allowed. This is based on two
© 2021 Jorge Pérez, Pablo Barceló, Javier Marinkovic.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/ . Attribution requirements are provided
athttp://jmlr.org/papers/v22/20-302.html . |
Pretrained Transformers for Text Ranking: BERT and Beyond.pdf | Pretrained Transformers for Text Ranking:
BERT and Beyond
Jimmy Lin,1Rodrigo Nogueira,1and Andrew Yates2,3
1David R. Cheriton School of Computer Science, University of Waterloo
2University of Amsterdam
3Max Planck Institute for Informatics
Version 0.99 — August 20, 2021
Abstract
The goal of text ranking is to generate an ordered list of texts retrieved from a
corpus in response to a query for a particular task. Although the most common
formulation of text ranking is search, instances of the task can also be found
in many text processing applications. This survey provides an overview of text
ranking with neural network architectures known as transformers, of which BERT
is the best-known example. The combination of transformers and self-supervised
pretraining has been responsible for a paradigm shift in natural language processing
(NLP), information retrieval (IR), and beyond. For text ranking, transformer-based
models produce high quality results across many domains, tasks, and settings.
This survey provides a synthesis of existing work as a single point of entry for
practitioners who wish to deploy transformers for text ranking and researchers who
wish to pursue work in this area. We cover a wide range of techniques, grouped
into two categories: transformer models that perform reranking in multi-stage
architectures and dense retrieval techniques that perform ranking directly. Examples
in the first category include approaches based on relevance classification, evidence
aggregation from multiple segments of text, and document and query expansion.
The second category involves using transformers to learn dense representations of
texts, where ranking is formulated as comparisons between query and document
representations that take advantage of nearest neighbor search.
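As a toy example of the first category (relevance classification in a reranking stage), the snippet below scores query-document pairs with a cross-encoder and sorts by score; the checkpoint name is a placeholder for any model fine-tuned to emit a single relevance logit, and batching and long-document handling are glossed over.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # placeholder cross-encoder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def rerank(query: str, docs: list[str]) -> list[tuple[str, float]]:
    """Score each (query, doc) pair with the cross-encoder and sort by relevance."""
    inputs = tokenizer([query] * len(docs), docs, padding=True,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)  # one relevance logit per pair
    return sorted(zip(docs, scores.tolist()), key=lambda x: x[1], reverse=True)
```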
At a high level, there are two themes that pervade our survey: techniques for
handling long documents, beyond typical sentence-by-sentence processing in NLP,
and techniques for addressing the tradeoff between effectiveness (i.e., result quality)
and efficiency (e.g., query latency, model and index size). Much effort has been
devoted to developing ranking models that address the mismatch between document
lengths and the length limitations of existing transformers. The computational
costs of inference with transformers has led to alternatives and variants that aim
for different tradeoffs, both within multi-stage architectures as well as with dense
learned representations.
Although transformer architectures and pretraining techniques are recent innova-
tions, many aspects of how they are applied to text ranking are relatively well
understood and represent mature techniques. However, there remain many open
research questions, and thus in addition to laying out the foundations of pretrained
transformers for text ranking, this survey also attempts to prognosticate where the
field is heading.arXiv:2010.06467v3 [cs.IR] 19 Aug 2021 |
2212.10560.pdf | SELF-INSTRUCT : Aligning Language Model
with Self Generated Instructions
Yizhong Wang♣Yeganeh Kordi♢Swaroop Mishra♡Alisa Liu♣
Noah A. Smith♣+Daniel Khashabi♠Hannaneh Hajishirzi♣+
♣University of Washington♢Tehran Polytechnic♡Arizona State University
♠Johns Hopkins University+Allen Institute for AI
[email protected]
Abstract
Large “instruction-tuned” language models (finetuned to respond to instructions) have demonstrated a
remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written
instruction data that is limited in quantity, diversity, and creativity, therefore hindering the generality of
the tuned model. We introduce SELF-INSTRUCT, a framework for improving the instruction-following
capabilities of pretrained language models by bootstrapping off its own generations. Our pipeline generates
instruction, input, and output samples from a language model, then prunes them before using them to
finetune the original model. Applying our method to vanilla GPT3, we demonstrate a 33% absolute
improvement over the original model on SUPER-NATURALINSTRUCTIONS, on par with the performance of
InstructGPT001, which is trained with private user data and human annotations. For further evaluation,
we curate a set of expert-written instructions for novel tasks, and show through human evaluation that
tuning GPT3 with SELF-INSTRUCT outperforms using existing public instruction datasets by a large margin,
leaving only a 5% absolute gap behind InstructGPT001. SELF-INSTRUCT provides an almost annotation-free
method for aligning pretrained language models with instructions, and we release our large synthetic dataset
to facilitate future studies on instruction tuning2.
1 Introduction
The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow
natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022;
Ouyang et al., 2022; Chung et al., 2022, i.a.). These developments are powered by two key components:
large pre-trained language models (LM) and human-written instruction data. PROMPTSOURCE (Bach et al.,
2022) and SUPER-NATURALINSTRUCTIONS (Wang et al., 2022) are two notable recent datasets that use
extensive manual annotation for collecting instructions to construct T0 (Bach et al., 2022; Sanh et al., 2022)
and Tk-INSTRUCT (Wang et al., 2022). However, this process is costly and often suffers limited diversity
given that most human generations tend to be popular NLP tasks, falling short of covering a true variety of
tasks and different ways to describe them. Given these limitations, continuing to improve the quality of
instruction-tuned models necessitates the development of alternative approaches for supervising
instruction-tuned models.
1Unless otherwise specified, our comparisons are with the text-davinci-001 engine. We focus on this engine since it
is the closest to our experimental setup: supervised fine-tuning with human demonstrations. The newer engines are more
powerful, though they use more data (e.g., code completion or latest user queries) or algorithms (e.g., PPO) that are
difficult to compare with.
2Code and data will be available at https://github.com/yizhongw/self-instruct.
In this work, we introduce SELF-INSTRUCT, a semi-automated process for instruction-tuning a pretrained
LM using instructional signals from the model itself. The overall process is an iterative bootstrapping
algorithm (see Figure 1), which starts off with a limited (e.g., 175 in our study) seed set of manually-written
instructions that are used to guide the overall generation. In the first phase, the model is prompted to
generate instructions for new tasks. This step leverages the existing collection of instructions to create more
broad-coverage instructions that define (often new) tasks. Given the newly-generated set of instructions,
the framework also creates input-output instances for them, which can be later used for supervising the
instruction tuning. Finally, various measures are used to prune low-quality and repeated instructions, before
adding them to the task pool. This process can be repeated for many interactions until reaching a large
number of tasks.
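The sketch below caricatures this bootstrapping loop in Python; `generate_instructions`, `generate_instances`, and `passes_filters` are hypothetical helpers standing in for the prompting and heuristic-filtering steps, not the released implementation.

```python
import random

def self_instruct_loop(seed_tasks, generate_instructions, generate_instances,
                       passes_filters, target_size=50_000):
    """Grow a task pool: sample in-context demos, generate new instructions and
    instances with the LM, filter low-quality or duplicate ones, and repeat."""
    task_pool = list(seed_tasks)
    while len(task_pool) < target_size:
        demos = random.sample(task_pool, k=min(8, len(task_pool)))  # in-context examples
        for instruction in generate_instructions(demos):
            if not passes_filters(instruction, task_pool):
                continue  # prune low-quality and repeated instructions
            instances = generate_instances(instruction)              # input-output pairs
            task_pool.append({"instruction": instruction, "instances": instances})
    return task_pool
```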
To evaluate SELF-INSTRUCT empirically, we run
arXiv:2212.10560v1 [cs.CL] 20 Dec 2022 |
2310.18313.pdf | FP8-LM: Training FP8 Large Language Models
Houwen Peng∗Kan Wu∗Yixuan Wei∗
Guoshuai Zhao Yuxiang Yang Ze Liu Yifan Xiong Ziyue Yang
Bolin Ni Jingcheng Hu Ruihang Li Miaosen Zhang Chen Li Jia Ning Ruizhe Wang Zheng Zhang
Shuguang Liu Joe Chau Han Hu†Peng Cheng†
Microsoft Azure and Microsoft Research
Abstract
In this paper, we explore FP8 low-bit data formats for efficient training of large language
models (LLMs). Our key insight is that most variables, such as gradients and optimizer
states, in LLM training can employ low-precision data formats without compromising model
accuracy and requiring no changes to hyper-parameters. Specifically, we propose a new FP8
automatic mixed-precision framework for training LLMs. This framework offers three levels
of FP8 utilization to streamline mixed-precision and distributed parallel training for LLMs.
It gradually incorporates 8-bit gradients, optimizer states, and distributed learning in an
incremental manner. Experiment results show that, during the training of GPT-175B model
on H100 GPU platform, our FP8 mixed-precision training framework not only achieved a
remarkable 42% reduction in real memory usage but also ran 64% faster than the widely
adopted BF16 framework (i.e., Megatron-LM), surpassing the speed of Nvidia Transformer
Engine by 17%. This largely reduces the training costs for large foundation models. Furthermore,
our FP8 mixed-precision training methodology is generic. It can be seamlessly applied to other
tasks such as LLM instruction tuning and reinforcement learning with human feedback,
offering savings in fine-tuning expenses. Our FP8 low-precision training
framework is open-sourced at aka.ms/MS.AMP.
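To give a flavor of what 8-bit storage of training state means, here is a small numpy sketch that fake-quantizes a gradient tensor onto an FP8-like (E4M3-style) grid with a per-tensor scale; this is a conceptual illustration only and is unrelated to the MS-AMP implementation or hardware FP8 paths.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value of the E4M3 format

def fake_quantize_fp8(x: np.ndarray, mantissa_bits: int = 3):
    """Simulate storing a tensor in an FP8-like format with a per-tensor scale.
    Returns the dequantized tensor and the scale, so the rounding error can be inspected."""
    scale = FP8_E4M3_MAX / (np.max(np.abs(x)) + 1e-12)    # map the tensor into the FP8 range
    scaled = np.clip(x * scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    mantissa, exponent = np.frexp(scaled)                 # scaled = mantissa * 2**exponent
    step = 2.0 ** -(mantissa_bits + 1)                    # frexp mantissa lies in [0.5, 1)
    mantissa = np.round(mantissa / step) * step           # keep only a few mantissa bits
    return np.ldexp(mantissa, exponent) / scale, scale

grad = (np.random.randn(1024) * 1e-3).astype(np.float32)
approx, scale = fake_quantize_fp8(grad)
print("max abs rounding error:", np.max(np.abs(grad - approx)))
```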
1 Introduction
Large language models (LLMs) (Brown et al., 2020; Smith et al., 2022; Chowdhery et al., 2022; Zhang et al.,
2022) have demonstrated unprecedented capabilities in language comprehension and generation, leading to
breakthroughs in reasoning, math, science, and many other tasks (OpenAI, 2023; Anil et al., 2023). However,
training LLMs is extremely costly. For example, PaLM takes 6,144 TPUv4 chips to train a 540B model, while
GPT-3 175B consumes several thousand petaflop/s-days of compute for pre-training (Chowdhery et al., 2022;
Brown et al., 2020). This motivates the needs of reducing the training costs of LLMs, especially for the scaling
of next-generation super-intelligent models.
Low-precision training is one of the most promising directions to reduce the costs, as it can provide high
speed, small memory footprint, and low communication overhead. Most existing training systems, e.g.,
Megatron-LM (Shoeybi et al., 2019), MetaSeq (Zhang et al., 2022), and Colossal-AI (Li et al., 2023a), train
LLMs with either FP32 full-precision or FP16/BF16 mixed-precision by default. This is not essential, however,
Contributions for all the authors can be found in Section 5.
* equal work†contact: {hanhu | pengc}@microsoft.comarXiv:2310.18313v1 [cs.LG] 27 Oct 2023 |
2307.10169.pdf | Challenges and Applications of Large Language Models
Jean Kaddourα,†,∗, Joshua Harrisβ,∗, Maximilian Mozesα,
Herbie Bradleyγ,δ,ϵ, Roberta Raileanuζ, and Robert McHardyη,∗
αUniversity College LondonβUK Health Security AgencyγEleutherAI
δUniversity of CambridgeϵStability AIζMeta AI ResearchηInstaDeep
Abstract
Large Language Models (LLMs) went from
non-existent to ubiquitous in the machine learn-
ing discourse within a few years. Due to the
fast pace of the field, it is difficult to identify
the remaining challenges and already fruitful
application areas. In this paper, we aim to es-
tablish a systematic set of open problems and
application successes so that ML researchers
can comprehend the field’s current state more
quickly and become productive.
Contents
1 Introduction 1
2 Challenges 2
2.1 Unfathomable Datasets . . . . . . 2
2.2 Tokenizer-Reliance . . . . . . . . 4
2.3 High Pre-Training Costs . . . . . 6
2.4 Fine-Tuning Overhead . . . . . . 10
2.5 High Inference Latency . . . . . . 11
2.6 Limited Context Length . . . . . . 14
2.7 Prompt Brittleness . . . . . . . . 17
2.8 Hallucinations . . . . . . . . . . . 19
2.9 Misaligned Behavior . . . . . . . 22
2.10 Outdated Knowledge . . . . . . . 27
2.11 Brittle Evaluations . . . . . . . . 27
2.12 Evaluations Based on Static,
Human-Written Ground Truth . . 28
2.13 Indistinguishability between Gen-
erated and Human-Written Text . 29
2.14 Tasks Not Solvable By Scale . . . 30
2.15 Lacking Experimental Designs . . 31
2.16 Lack of Reproducibility . . . . . . 33
3 Applications 34
3.1 Chatbots . . . . . . . . . . . . . . 34
3.2 Computational Biology . . . . . . 36
3.3 Computer Programming . . . . . 37
*Equal contribution.
†{jean.kaddour,robert.mchardy}[email protected],
[email protected]
[Figure 1 diagram: challenges grouped into Design (e.g., Unfathomable Datasets, Tokenizer-Reliance, Fine-Tuning Overhead, High Pre-Training Costs), Behavior (e.g., Prompt Brittleness, Misaligned Behavior, Outdated Knowledge, High Inference Latency, Limited Context Length, Hallucinations), and Science (e.g., Evaluations Based on Static Human-Written Ground Truth, Lacking Experimental Designs, Lack of Reproducibility, Detecting Generated Texts, Brittle Evaluations, Tasks Not Solvable By Scale).]
Figure 1: Overview of LLM Challenges. Designing LLMs relates to decisions taken before deployment. Behavioral challenges occur during deployment. Science challenges hinder academic progress.
3.4 Creative Work . . . . . . . . . . . 39
3.5 Knowledge Work . . . . . . . . . 40
3.6 Law . . . . . . . . . . . . . . . . 42
3.7 Medicine . . . . . . . . . . . . . 43
3.8 Reasoning . . . . . . . . . . . . . 44
3.9 Robotics and Embodied Agents . . 45
3.10 Social Sciences & Psychology . . 46
3.11 Synthetic Data Generation . . . . 48
4 Related Work 49
5 Conclusion 49
1 Introduction
Given the quickly growing plethora of LLM re-
search papers, we aim to address two questions: (1)
Challenges : What problems remain unresolved?
and (2) Applications : Where are LLMs currently
being applied, and how are the challenges con-
straining them? For (1), we group the challenges
1arXiv:2307.10169v1 [cs.CL] 19 Jul 2023 |
2401.01325.pdf | LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
Hongye Jin1 *Xiaotian Han1 *Jingfeng Yang2Zhimeng Jiang1Zirui Liu3Chia-Yuan Chang1
Huiyuan Chen4Xia Hu3
Abstract
This work elicits LLMs’ inherent ability to handle
long contexts without fine-tuning. The limited
length of the training sequence during training
may limit the application of Large Language Mod-
els (LLMs) on long input sequences for inference.
In this work, we argue that existing LLMs them-
selves have inherent capabilities for handling long
contexts. Based on this argument, we suggest ex-
tending LLMs’ context window by themselves to
fully utilize the inherent ability. We propose Self-
Extend to stimulate LLMs’ long context handling
potential. The basic idea is to construct bi-level
attention information: the group level and the
neighbor level. The two levels are computed by
the original model’s self-attention, which means
the proposed does not require any training. With
only four lines of code modification, the proposed
method can effortlessly extend existing LLMs’
context window without any fine-tuning. We con-
duct comprehensive experiments and the results
show that the proposed method can effectively
extend existing LLMs’ context window’s length.
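As a rough sketch of the bi-level idea (not the authors' released code), the snippet below keeps exact relative positions inside a local neighborhood and falls back to coarse, floor-divided "group" positions for distant tokens; the group size and neighbor window are illustrative parameters.

```python
import torch

def self_extend_positions(q_pos: torch.Tensor, k_pos: torch.Tensor,
                          group_size: int = 8, neighbor_window: int = 512) -> torch.Tensor:
    """Effective relative positions for attention: exact for nearby tokens
    (neighbor level), floor-divided for distant tokens (group level), so no
    relative position exceeds what the model saw during pretraining."""
    rel = q_pos[:, None] - k_pos[None, :]                         # exact relative positions
    grouped = q_pos[:, None] // group_size - k_pos[None, :] // group_size
    # shift the grouped positions so they continue smoothly past the neighbor window
    grouped = grouped + neighbor_window - neighbor_window // group_size
    return torch.where(rel < neighbor_window, rel, grouped)
```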
1. Introduction
The context window length of most existing LLMs is lim-
ited since they are trained with a fixed length of training
sequences. It’s determined by the context window length
during the pretraining stage. Once the length of the input
texts exceeds the pretraining context window during the
inference, the behavior of LLMs will be unpredictable and
suffer from severe performance degradation, which is shown
on the perplexity (PPL) metric, the PPL of the model will
explode with the long input sequence (Xiao et al., 2023;
Peng et al., 2023b; Han et al., 2023; Chen et al., 2023b).
*Equal contribution1Texas A&M University2Amazon, the
views expressed or the conclusions reached are his own and
do not represent the view of Amazon3Rice University4Case
Western Reserve University. Correspondence to: Hongye Jin
<[email protected] >.
Working in Progress.
Recently, a variety of context window extension methods
have been developed to tackle the challenge of extending
the context window size of pretrained LLMs. A common
and straightforward approach is to fine-tune these models
on enough extensive texts (Xiong et al., 2023). Besides this,
some methods seek to achieve extension either without the
need for fine-tuning or with only minimal fine-tuning, offer-
ing a more efficient alternative. Among these contemporary
methods, some notable techniques include ’PI’ (Chen et al.,
2023b), ’CLEX’ (Chen et al., 2023a) and ’Yarn’ (Peng et al.,
2023b). However, they typically necessitate a few fine-
tuning steps to achieve extension, which could be resource-
intensive and time-consuming. And these methods aim to
extend the context window based on the assumption that
pretrained LLMs lack the ability to handle long content.
Thus, limited finetuning could make LLMs overfit to specific
long sequences, which lacks generalizability over out-of-
distribution long sequences and loses performance on short
sequences obtained during pretraining. On the other hand,
some methods (Xiao et al., 2023; Han et al., 2023; Ding
et al., 2023) aim to avoid fine-tuning. These fine-tuning-
free approaches rely predominantly on local information
in the sequence. However, these methods may fall short in
effectively expanding the context window, as it is limited
to using only local tokens rather than expanding the overall
context-handling capacity of the LLMs. Consequently, they
may not fully realize the potential of extending the context
window in LLMs and have inferior performance.
Instead of extending the context window, in this paper, we
believe LLMs should have inherent capabilities to handle
long contexts . Our belief stems from the fact that when we,
as human beings, are children, we are taught how to read and
write using relatively short texts, such as articles spanning
several pages. We rarely use extremely long texts like entire
books or complete documents as learning materials. Yet,
we are still able to understand long texts effectively. With
this strong motivation, the poor performance of LLMs while
facing long text out of the pretraining context window size is
not due to the lack of long context understanding capabilities.
We suppose that there should be a way to elicit LLMs’
inherent long context capability.
In our analysis, we observe that the key challenge prevent-
ing LLMs from effectively managing extensive contexts is
1arXiv:2401.01325v1 [cs.CL] 2 Jan 2024 |
2304.11082.pdf | Preprint. Under review.
FUNDAMENTAL LIMITATIONS OF ALIGNMENT
INLARGE LANGUAGE MODELS
Yotam Wolf∗
The Hebrew University
[email protected] Wies∗
The Hebrew University
[email protected]
Yoav Levine
AI21 Labs
[email protected] Shashua
The Hebrew University
[email protected]
ABSTRACT
An important aspect in developing language models that interact with humans is
aligning their behavior to be useful and unharmful for their human users. This is
usually achieved by tuning the model in a way that enhances desired behaviors and
inhibits undesired ones, a process referred to as alignment . In this paper, we propose
a theoretical approach called Behavior Expectation Bounds (BEB) which allows us
to formally investigate several inherent characteristics and limitations of alignment
in large language models. Importantly, we prove that for any behavior that has
a finite probability of being exhibited by the model, there exist prompts that can
trigger the model into outputting this behavior, with probability that increases with
the length of the prompt. This implies that any alignment process that attenuates
undesired behavior but does not remove it altogether, is not safe against adversarial
prompting attacks. Furthermore, our framework hints at the mechanism by which
leading alignment approaches such as reinforcement learning from human feedback
increase the LLM’s proneness to being prompted into the undesired behaviors.
Moreover, we include the notion of personas in our BEB framework, and find that
behaviors which are generally very unlikely to be exhibited by the model can be
brought to the front by prompting the model to behave as specific persona. This
theoretical result is being experimentally demonstrated in large scale by the so
called contemporary “chatGPT jailbreaks", where adversarial users trick the LLM
into breaking its alignment guardrails by triggering it into acting as a malicious
persona. Our results expose fundamental limitations in alignment of LLMs and
bring to the forefront the need to devise reliable mechanisms for ensuring AI safety.
1 I NTRODUCTION
Training large language models (LLMs) over vast corpora has revolutionized natural language
processing, giving LLMs the ability to mimic human-like interactions and serve as general purpose
assistants in a wide variety of tasks, such as wide-scoped question answering, writing assistance,
teaching, and more (Radford et al., 2019; Devlin et al., 2019; Brown et al., 2020; Schulman et al.,
2023; OpenAI, 2023; Bubeck et al., 2023; Nori et al., 2023; West, 2023; Park et al., 2023). A growing
concern due to the increasing reliance on LLMs for such purposes is the harm they can cause their
users, such as feeding fake information (Lin et al., 2022; Weidinger et al., 2022), behaving offensively
and feeding social biases (Hutchinson et al., 2020; Venkit et al., 2022; Weidinger et al., 2022), or
encouraging problematic behaviors by users (even by psychologically manipulating them Roose
2023; Atillah 2023). Indeed, evidently, the unsupervised textual data used for pretraining modern
LLMs includes enough demonstrations of the above undesired behaviors for them to be present in the
resulting models (Bender et al., 2021). The act of removing these undesired behaviors is often called
alignment (Yudkowsky, 2001; Taylor et al., 2016; Amodei et al., 2016; Shalev-Shwartz et al., 2020;
Hendrycks et al., 2021; Pan et al., 2022; Ngo, 2022).
∗Equal contribution
1arXiv:2304.11082v1 [cs.CL] 19 Apr 2023 |
2302.04065.pdf | Monge, Bregman and Occam: Interpretable Optimal Transport in
High-Dimensions with Feature-Sparse Maps
Marco Cuturi1Michal Klein1Pierre Ablin1
Abstract
Optimal transport (OT) theory focuses, among all maps T : R^d → R^d that can morph a probability
measure onto another, on those that are the "thriftiest", i.e. such that the averaged cost c(x, T(x))
between x and its image T(x) be as small as possible. Many computational approaches have been
proposed to estimate such Monge maps when c is the squared ℓ2 distance ℓ_2^2, e.g., using entropic
maps (Pooladian and Niles-Weed, 2021), or neural networks (Makkuva et al., 2020; Korotin et al., 2020).
We propose a new model for transport maps, built on a family of translation invariant costs
c(x, y) := h(x − y), where h := (1/2)∥·∥_2^2 + τ and τ is a regularizer. We propose a generalization of
the entropic map suitable for h, and highlight a surprising link tying it with the Bregman centroids of
the divergence D_h generated by h, and the proximal operator of τ. We show that choosing a
sparsity-inducing norm for τ results in maps that apply Occam's razor to transport, in the sense that
the displacement vectors ∆(x) := T(x) − x they induce are sparse, with a sparsity pattern that varies
depending on x. We showcase the ability of our method to estimate meaningful OT maps for
high-dimensional single-cell transcription data, in the 34000-d space of gene counts for cells, without
using dimensionality reduction, thus retaining the ability to interpret all displacements at the gene level.
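For readers less familiar with the two objects named in the abstract, these are the standard textbook definitions, stated here only for context (background facts, not results of this paper):

D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩   (Bregman divergence generated by a differentiable, strictly convex h),
prox_τ(z) = argmin_u (1/2)∥u − z∥_2^2 + τ(u)   (proximal operator of the regularizer τ).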
1. Introduction
A fundamental task in machine learning is learning how to
transfer observations from a source to a target probability
measure. For such problems, optimal transport (OT) (San-
tambrogio, 2015) has emerged as a powerful toolbox that
can improve performance and guide theory in various set-
tings. For instance, the computational approaches advo-
cated in OT have been used to transfer knowledge across
1Apple.{cuturi,michalk,p ablin}@apple.comdatasets in domain adaptation tasks (Courty et al., 2016;
2017), train generative models (Montavon et al., 2016; Ar-
jovsky et al., 2017; Genevay et al., 2018; Salimans et al.,
2018), and realign datasets in natural sciences (Janati et al.,
2019; Schiebinger et al., 2019).
High-dimensional Transport. OT finds its most straightforward and intuitive use-cases in
low-dimensional geometric domains (grids and meshes, graphs, etc...). This work focuses on the more
challenging problem of using it on distributions in R^d, with d ≫ 1. In R^d, the ground cost c(x, y)
between observations x, y is often the ℓ2 metric or its square ℓ_2^2. However, when used on large-d data
samples, that choice is rarely meaningful. This is due to the curse-of-dimensionality associated with OT
estimation (Dudley et al., 1966; Weed and Bach, 2019) and the fact that the Euclidean
distance loses its discriminative power as dimension grows.
To mitigate this, practitioners rely on dimensionality reduc-
tion, either in two steps , before running OT solvers, using,
e.g., PCA, a V AE, or a sliced-Wasserstein approach (Rabin
et al., 2012; Bonneel et al., 2015); or jointly , by estimating
both a projection and transport, e.g., on hyperplanes (Niles-
Weed and Rigollet, 2022; Paty and Cuturi, 2019; Lin et al.,
2020; Huang et al., 2021; Lin et al., 2021), lines (Deshpande
et al., 2019; Kolouri et al., 2019), trees (Le et al., 2019) or
more advanced featurizers (Salimans et al., 2018). However,
an obvious drawback of these approaches is that transport
maps estimated in reduced dimensions are hard to interpret
in the original space (Muzellec and Cuturi, 2019).
Contributions. To target high-d regimes, we introduce a radically different approach. We use the
sparsity toolbox (Hastie et al., 2015; Bach et al., 2012) to build OT maps that are, adaptively to input x,
drastically simpler:
• We introduce a generalized entropic map (Pooladian and Niles-Weed, 2021) for translation invariant
costs c(x, y) := h(x − y), where h is strongly convex. That entropic map T_{h,ε} is defined almost
everywhere (a.e.), and we show that it induces displacements ∆(x) := T(x) − x that can be cast as
Bregman centroids, relative to the Bregman divergence generated by h.
• When h is an elastic-type regularizer, the sum of a strongly-convex term ℓ_2^2 and a sparsifying
norm τ, we show that such centroids are obtained using the proximal
arXiv:2302.04065v1 [stat.ML] 8 Feb 2023 |
2106.09685.pdf | LORA: L OW-RANK ADAPTATION OF LARGE LAN-
GUAGE MODELS
Edward Hu∗Yelong Shen∗Phillip Wallis Zeyuan Allen-Zhu
Yuanzhi Li Shean Wang Lu Wang Weizhu Chen
Microsoft Corporation
{edwardhu, yeshe, phwallis, zeyuana,
yuanzhil, swang, luw, wzchen }@microsoft.com
[email protected]
(Version 2)
ABSTRACT
An important paradigm of natural language processing consists of large-scale pre-
training on general domain data and adaptation to particular tasks or domains. As
we pre-train larger models, full fine-tuning, which retrains all model parameters,
becomes less feasible. Using GPT-3 175B as an example – deploying indepen-
dent instances of fine-tuned models, each with 175B parameters, is prohibitively
expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-
trained model weights and injects trainable rank decomposition matrices into each
layer of the Transformer architecture, greatly reducing the number of trainable pa-
rameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam,
LoRA can reduce the number of trainable parameters by 10,000 times and the
GPU memory requirement by 3 times. LoRA performs on-par or better than fine-
tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite hav-
ing fewer trainable parameters, a higher training throughput, and, unlike adapters,
no additional inference latency . We also provide an empirical investigation into
rank-deficiency in language model adaptation, which sheds light on the efficacy of
LoRA. We release a package that facilitates the integration of LoRA with PyTorch
models and provide our implementations and model checkpoints for RoBERTa,
DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA .
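A minimal PyTorch sketch of the reparametrization described above: the pretrained weight is frozen and only the low-rank factors are trained, with B zero-initialized so training starts from the pretrained behavior. Dimensions, initialization scale, and the alpha/r scaling are illustrative conventions, not copied from the released package.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained weight W plus a trainable low-rank update (alpha/r) * B A."""
    def __init__(self, pretrained: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.weight = pretrained.weight                      # frozen W, shape (d_out, d_in)
        self.weight.requires_grad_(False)
        d_out, d_in = self.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))         # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # pretrained bias, if any, is omitted for brevity
        return x @ self.weight.T + self.scaling * (x @ self.A.T @ self.B.T)
```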
1 I NTRODUCTION
[Figure 1 diagram: frozen pretrained weights W ∈ ℝ^{d×d} applied to input x, alongside trainable low-rank matrices of rank r with A ~ N(0, σ²) and B = 0.]
Figure 1: Our reparametrization. We only train A and B.
Many applications in natural language processing rely on adapting
one large-scale, pre-trained language model to multiple down-
stream applications. Such adaptation is usually done via fine-tuning ,
which updates all the parameters of the pre-trained model. The ma-
jor downside of fine-tuning is that the new model contains as many
parameters as in the original model. As larger models are trained
every few months, this changes from a mere “inconvenience” for
GPT-2 (Radford et al., b) or RoBERTa large (Liu et al., 2019) to a
critical deployment challenge for GPT-3 (Brown et al., 2020) with
175 billion trainable parameters.1
Many sought to mitigate this by adapting only some parameters or
learning external modules for new tasks. This way, we only need
to store and load a small number of task-specific parameters in ad-
dition to the pre-trained model for each task, greatly boosting the
operational efficiency when deployed. However, existing techniques
∗Equal contribution.
0Compared to V1, this draft includes better baselines, experiments on GLUE, and more on adapter latency.
1While GPT-3 175B achieves non-trivial performance with few-shot learning, fine-tuning boosts its perfor-
mance significantly as shown in Appendix A.
1arXiv:2106.09685v2 [cs.CL] 16 Oct 2021 |
10.2307.2334029.pdf | A note on DPO with noisy preferences & relationship to IPO
Eric Mitchell
November 25, 2023 (v1.1)
'OG' RLHF aims for reward maximization with a KL constraint to reference model π_ref (inputs x omitted):

    π* = argmax_π E_{y∼π} [ r(y) − β log( π(y) / π_ref(y) ) ]    (1)

DPO [3] derives a loss on the current policy π_θ (where our dataset says y_w is preferred to y_l, or y_w ≻ y_l):

    L_DPO(θ, y_w, y_l) = −log σ( β log( π_θ(y_w) / π_ref(y_w) ) − β log( π_θ(y_l) / π_ref(y_l) ) ),    (2)

i.e., the binary cross entropy with p̂_θ(y_w ≻ y_l) = σ( β log(π_θ(y_w)/π_ref(y_w)) − β log(π_θ(y_l)/π_ref(y_l)) ) and target p(y_w ≻ y_l) = 1.

What if preference labels are noisy? Say the labels have been flipped with some small probability ε ∈ (0, 0.5). We can use a conservative target distribution instead, p(y_w ≻ y_l) = 1 − ε, giving BCE loss:

    L^ε_DPO(θ, y_w, y_l) = −(1 − ε) log p̂_θ(y_w ≻ y_l) − ε log(1 − p̂_θ(y_w ≻ y_l))    (3)
                         = (1 − ε) L_DPO(θ, y_w, y_l) + ε L_DPO(θ, y_l, y_w)    (4)

The gradient of L^ε_DPO(θ, y_w, y_l) is simply the weighted sum of gradients (1 − ε) ∇_θ L(θ, y_w, y_l) + ε ∇_θ L(θ, y_l, y_w), which reduces to the simplified form (ignoring constants; see [3] for the gradient of the original DPO loss):

    ∇_θ L^ε_DPO(θ, y_w, y_l) = −[ (1 − ε)(1 − p̂_θ) − ε p̂_θ ] [ ∇_θ log π_θ(y_w) − ∇_θ log π_θ(y_l) ]    (5)
                             = [ p̂_θ − (1 − ε) ] [ ∇_θ log π_θ(y_w) − ∇_θ log π_θ(y_l) ]    (6)

(in Eq. (5), the ∇_θ log π_θ(y_w) term upweights y_w and the ∇_θ log π_θ(y_l) term downweights y_l). The gradient is zero when p̂_θ(y_w ≻ y_l) = 1 − ε, i.e., our (implicit) reward assigns the desired confidence level in this training example under the Bradley-Terry model [2]. For normal DPO, the gradient is never zero! Using the shorthand h^{y_w,y_l}_{π_θ} = log(π_θ(y_w)/π_ref(y_w)) − log(π_θ(y_l)/π_ref(y_l)), let's compare the conservative DPO (cDPO?) and IPO [1] loss gradients, where the IPO loss is given in Eq. 17 of [1] as L_IPO(θ, y_w, y_l) = ( h^{y_w,y_l}_π − 1/(2β) )²:

    ∇_θ L_IPO(θ, y_w, y_l)   = ( h^{y_w,y_l}_{π_θ} − 1/(2β) ) [ ∇_θ log π_θ(y_w) − ∇_θ log π_θ(y_l) ]    (7)
    ∇_θ L^ε_DPO(θ, y_w, y_l) = ( σ(β h^{y_w,y_l}_{π_θ}) − (1 − ε) ) [ ∇_θ log π_θ(y_w) − ∇_θ log π_θ(y_l) ]    (8)

TL;DR: conservative DPO trains the model until a desired improvement in the implicit probability assigned by the model to the observed preferences^1 is met; IPO trains the model until a desired improvement in implicit reward is met. The ability for cDPO and IPO to optimize only to a fixed delta from the reference model and then stop (or even reverse!) likely makes these more stable than the original DPO loss after lots of training.
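A minimal PyTorch sketch of the conservative loss in Eqs. (3)-(4), assuming per-sequence log-probabilities have already been summed for the policy and reference models; batching and masking details are omitted.

```python
import torch.nn.functional as F

def cdpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, eps=0.1):
    """Conservative DPO: binary cross entropy against the soft target 1 - eps.
    Inputs are summed sequence log-probabilities, each of shape (batch,)."""
    h = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))  # scaled implicit reward gap
    # -(1 - eps) * log sigmoid(h) - eps * log sigmoid(-h)
    return (-(1 - eps) * F.logsigmoid(h) - eps * F.logsigmoid(-h)).mean()
```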
[1] Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. A
General Theoretical Paradigm to Understand Learning from Human Preferences . 2023. arXiv: 2310.12036 [cs.AI] .
[2] Ralph Allan Bradley and Milton E. Terry. “Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons”.
In:Biometrika 39.3/4 (1952), pp. 324–345. doi:https://doi.org/10.2307/2334029 .
[3] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. “Direct Preference
Optimization: Your Language Model is Secretly a Reward Model”. In: Neural Information Processing Systems . 2023.
1The Bradley-Terry model of human preferences [2] converts the β-scaled reward gap hyw,ylπθto a probability assigned by
the model to the observed preference bit using the sigmoid of the scaled reward gap. |
10.1016.j.cell.2024.01.003.pdf | Leading Edge
Commentary
Structure is beauty, but not always truth
James S. Fraser1,*and Mark A. Murcko2,*
1Department of Bioengineering and Therapeutic Sciences, University of California San Francisco, San Francisco, CA, USA
2Disruptive Biomedical LLC, Holliston, MA, USA
*Correspondence: [email protected] (J.S.F.), [email protected] (M.A.M.)
https://doi.org/10.1016/j.cell.2024.01.003
Structural biology, as powerful as it is, can be misleading. We highlight four fundamental challenges: interpreting raw experimental data; accounting for motion; addressing the misleading nature of in vitro structures; and unraveling interactions between drugs and "anti-targets." Overcoming these challenges will amplify the impact of structural biology on drug discovery.
Introduction
Amidst the many uncertainties that complicate drug discovery, structural biology anchors the process in beautiful and concrete images of drugs interacting with receptors. Structure can enable the tackling of many of the key challenges of drug design. Atomistic models emerging from cryo-electron microscopy (cryo-EM), X-ray crystallography, and nuclear magnetic resonance (NMR) provide strong starting points for thinking broadly and creatively about how to modulate protein function by identifying binding pockets and potential allosteric sites. Ligand-bound structures greatly focus the search of chemical space to molecules that maintain key interactions with the receptor.
By offering a "ground truth," structural biology is clear, quantifiable, and interpretable. In the best cases, the precise location of every atom is clearly defined. For example, measuring the distances between atoms on the ligand and receptor allow us to infer that "this hydrogen bond is better than that one." By comparison, the biology of the target is often quite complex and difficult to model quantitatively. Cellular assays and animal models are approximations that do not fully recapitulate the human disease process or the potential of small molecules to induce toxicities. In addition, medicinal chemistry is full of uncertainty. Even with a structure, in the course of inhibitor optimization, it is difficult to know what molecules to make next or how to synthesize them. Finally, we struggle to understand why some molecules are more potent or more bioavailable than others, making it next to impossible to optimize the pharmacokinetic and safety profile of a drug candidate.
However, the "truth" of structural biology raises a legitimate concern: does the availability of structural information, such as a protein crystal structure, irreparably constrain the creative process? This is a potential risk for those who fail to recognize the inherent limitations in the structures and the new predictions (e.g., AlphaFold2^1) trained on the corpus of the Protein Data Bank (PDB). We suggest it is useful to consider four kinds of limitations and offer ways that the field can address each of them to optimize the value we derive from structural biology and further improve the quality of predictive modeling.
Four harsh truths about structural biology and drug discovery
(1) A structure is a model, not experimental reality.
(2) Representing wiggling and jiggling is hard.
(3) In vitro can be deceiving.
(4) Drugs mingle with many different receptors.
A structure is a model, not experimental reality
Undoubtedly, AlphaFold2^1 shook up the field of structural biology by "solving" the protein structure prediction problem. "Solving" means that the predicted models are highly similar to "ground truth" experimentally determined structures by the metrics used by the CASP (Critical Assessment of Structure Prediction) competition, a community-wide experiment to determine and advance the state of the art in modeling protein structure from amino acid sequence. It is important to note that ground truth structures contain inaccuracies beyond the signal-to-noise of the experiments that generate them. For example, in X-ray crystallography, the experimental data is measured very precisely (usually to less than 5% error), but the structures refined against that data have large residual errors compared to the experimental data (generally >~20%).^2 Adding in prior knowledge, such as geometry restraints, is especially important as the resolution of the experimental data gets worse.^2 "Truth" therefore may not lie only in comparison to the atomic coordinates, especially when the structure is based on low-resolution data. Rather, comparisons to density maps (or even raw diffraction images or micrographs) may reveal a deeper form of truth (Figure 1).
Recent work has started the important task of comparing AlphaFold2 models directly to experimental crystallographic density maps.^3 In many cases, predictions closely matched experimental maps. Refinement of the AlphaFold2 models against experimental data can resolve some global scale distortion and issues of domain orientation. Refinement also improves local backbone and side-chain conformations. However, most very high-confidence predictions differed from experimental maps to a greater extent than independently determined experimental structures.
Beyond suggesting that direct agreement with experimental data, not "structures," could be a new benchmark of "ground truth," these findings prompt us to ask how we can maximize the utility of computationally predicted models in drug discovery. Some differences
Cell 187, February 1, 2024 © 2024 Elsevier Inc. 517 |
2307.13304.pdf | QuIP: 2-Bit Quantization of
Large Language Models With Guarantees
Jerry Chee
Department of Computer Science
Cornell University
[email protected] Cai
Department of Electrical and
Computer Engineering
Cornell University
[email protected]
Volodymyr Kuleshov
Department of Computer Science
Cornell University
[email protected] De Sa
Department of Computer Science
Cornell University
[email protected]
Abstract
This work studies post-training parameter quantization in large language models
(LLMs). We introduce quantization with incoherence processing (QuIP), a new
method based on the insight that quantization benefits from incoherent weight and
Hessian matrices, i.e., from the weights and the directions in which it is important
to round them accurately being unaligned with the coordinate axes. QuIP consists
of two steps: (1) an adaptive rounding procedure minimizing a quadratic proxy
objective; (2) efficient pre- and post-processing that ensures weight and Hessian
incoherence via multiplication by random orthogonal matrices. We complement
QuIP with the first theoretical analysis for an LLM-scale quantization algorithm,
and show that our theory also applies to an existing method, OPTQ. Empirically,
we find that our incoherence preprocessing improves several existing quantization
algorithms and yields the first LLM quantization methods that produce viable
results using only two bits per weight. Our code can be found on GitHub.
1 Introduction
Large language models (LLMs) have enabled advances in text generation, few-shot learning, reason-
ing, protein sequence modeling, and other tasks [ 2,29,34]. The massive size of these models—often
reaching into hundreds of billions of parameters—requires sophisticated deployment methods and
motivates research into efficient inference algorithms.
This work studies the post-training quantization of LLM parameters as a way to improve their runtime
efficiency [ 4,8,22,30,32,33]. Our key insight is that quantization can be most effective when
weight and proxy Hessian matrices are incoherent —intuitively, both the weights themselves and
the directions in which it is important to have good rounding accuracy are not too large in any one
coordinate, which makes it easier to adaptively round the weights to a finite set of compressed values.
We use this intuition to develop theoretically sound two-bit quantization algorithms that scale to
LLM-sized models.
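As a toy illustration of the incoherence idea (not the QuIP algorithm itself), the sketch below conjugates a weight matrix by random orthogonal matrices and checks that its largest entry shrinks; the sizes and the planted outlier are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n: int) -> np.ndarray:
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

d_out, d_in = 256, 512
W = rng.standard_normal((d_out, d_in))
W[0, 0] = 50.0                       # plant one large, incoherence-breaking entry

U, V = random_orthogonal(d_out), random_orthogonal(d_in)
W_tilde = U @ W @ V.T                # the rotation can be undone exactly after rounding

print("max |entry| before:", np.abs(W).max())
print("max |entry| after :", np.abs(W_tilde).max())  # typically far smaller
```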
Specifically, we introduce quantization with incoherence processing (QuIP), a new method motivated
by the above insight. QuIP consists of two steps: (1) an adaptive rounding [ 20] procedure, which
minimizes a quadratic proxy objective ℓ(Ŵ) = tr((Ŵ − W) H (Ŵ − W)^T) of the error between the
original weights W and the quantized weights Ŵ using an estimate of the Hessian H; (2) efficient
Preprint. Under review.arXiv:2307.13304v1 [cs.LG] 25 Jul 2023 |
2203.02155.pdf | Training language models to follow instructions
with human feedback
Long Ouyang∗Jeff Wu∗Xu Jiang∗Diogo Almeida∗Carroll L. Wainwright∗
Pamela Mishkin∗Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray
John Schulman Jacob Hilton Fraser Kelton Luke Miller Maddie Simens
Amanda Askell†Peter Welinder Paul Christiano∗†
Jan Leike∗Ryan Lowe∗
OpenAI
Abstract
Making language models bigger does not inherently make them better at following
a user’s intent. For example, large language models can generate outputs that
are untruthful, toxic, or simply not helpful to the user. In other words, these
models are not aligned with their users. In this paper, we show an avenue for
aligning language models with user intent on a wide range of tasks by fine-tuning
with human feedback. Starting with a set of labeler-written prompts and prompts
submitted through the OpenAI API, we collect a dataset of labeler demonstrations
of the desired model behavior, which we use to fine-tune GPT-3 using supervised
learning. We then collect a dataset of rankings of model outputs, which we use to
further fine-tune this supervised model using reinforcement learning from human
feedback. We call the resulting models InstructGPT . In human evaluations on
our prompt distribution, outputs from the 1.3B parameter InstructGPT model are
preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters.
Moreover, InstructGPT models show improvements in truthfulness and reductions
in toxic output generation while having minimal performance regressions on public
NLP datasets. Even though InstructGPT still makes simple mistakes, our results
show that fine-tuning with human feedback is a promising direction for aligning
language models with human intent.
1 Introduction
Large language models (LMs) can be “prompted” to perform a range of natural language process-
ing (NLP) tasks, given some examples of the task as input. However, these models often express
unintended behaviors such as making up facts, generating biased or toxic text, or simply not following
user instructions (Bender et al., 2021; Bommasani et al., 2021; Kenton et al., 2021; Weidinger et al.,
2021; Tamkin et al., 2021; Gehman et al., 2020). This is because the language modeling objective
∗Primary authors. This was a joint project of the OpenAI Alignment team. RL and JL are the team leads.
Corresponding author: [email protected] .
†Work done while at OpenAI. Current affiliations: AA: Anthropic; PC: Alignment Research Center.arXiv:2203.02155v1 [cs.CL] 4 Mar 2022 |
2305.14992.pdf | Reasoning with Language Model is
Planning with World Model
Shibo Hao∗♣Yi Gu∗ ∗♣Haodi Ma♢Joshua Jiahua Hong♣Zhen Wang♣ ♠
Daisy Zhe Wang♢Zhiting Hu♣
♣UC San Diego,♢University of Florida
♠Mohamed bin Zayed University of Artificial Intelligence
{s5hao, yig025, jjhong, zhw085, zhh019}@ucsd.edu
{ma.haodi, daisyw}@ufl.edu
Abstract
Large language models (LLMs) have shown remarkable reasoning capabilities,
especially when prompted to generate intermediate reasoning steps (e.g., Chain-of-
Thought, CoT). However, LLMs can still struggle with problems that are easy for
humans, such as generating action plans for executing tasks in a given environment,
or performing complex math, logical, and commonsense reasoning. The deficiency
stems from the key fact that LLMs lack an internal world model to predict the world
state (e.g., environment status, intermediate variable values) and simulate long-term
outcomes of actions. This prevents LLMs from performing deliberate planning akin
to human brains, which involves exploring alternative reasoning paths, anticipating
future states and rewards, and iteratively refining existing reasoning steps. To
overcome the limitations, we propose a new LLM reasoning framework, Reasoning
via Planning (RAP). RAP repurposes the LLM as both a world model and a
reasoning agent, and incorporates a principled planning algorithm (based on Monte
Carlo Tree Search) for strategic exploration in the vast reasoning space. During
reasoning, the LLM (as agent) incrementally builds a reasoning tree under the
guidance of the LLM (as world model) and task-specific rewards, and obtains a
high-reward reasoning path efficiently with a proper balance between exploration
vs.exploitation. We apply RAP to a variety of challenging reasoning problems
including plan generation, math reasoning, and logical inference. Empirical results
on these tasks demonstrate the superiority of RAP over various strong baselines,
including CoT and least-to-most prompting with self-consistency. RAP on LLaMA-
33B surpasses CoT on GPT-4 with 33% relative improvement in a plan generation
setting.
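To make the search loop concrete, here is a compact MCTS skeleton in the spirit of the framework; `propose_actions`, `world_model_step`, and `reward_fn` are hypothetical callables standing in for the LLM-as-agent, LLM-as-world-model, and task-specific reward, and the selection rule is a simplified UCT.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def mcts(root_state, propose_actions, world_model_step, reward_fn,
         n_iters=100, c_uct=1.0, max_depth=6):
    """Simplified MCTS: select by UCT, expand with LLM-proposed actions,
    score leaves with the task reward, and back the values up to the root."""
    root = Node(root_state)
    for _ in range(n_iters):
        node, depth = root, 0
        while node.children and depth < max_depth:            # selection
            node = max(node.children,
                       key=lambda ch: ch.value / (ch.visits + 1e-9)
                       + c_uct * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
            depth += 1
        if not node.children and depth < max_depth:            # expansion
            for action in propose_actions(node.state):          # LLM as agent
                node.children.append(Node(world_model_step(node.state, action), parent=node))
        leaf = random.choice(node.children) if node.children else node
        value = reward_fn(leaf.state)                           # task-specific reward
        while leaf is not None:                                 # backpropagation
            leaf.visits += 1
            leaf.value += value
            leaf = leaf.parent
    return max(root.children, key=lambda ch: ch.visits) if root.children else root
```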
1 Introduction
Large language models (LLMs) have exhibited emergent reasoning abilities in a wide range of
tasks [ 5,10,44,2]. Recent approaches further boost their ability by prompting LLMs to generate
intermediate reasoning steps (e.g., Chain-of-Thought, CoT [ 59]) or answer a series of subquestions
(e.g., least-to-most prompting [ 66]). However, LLMs still face difficulties with tasks that humans
find easy. For example, in creating action plans to move blocks to a target state, GPT-3 [ 5] achieves a
success rate of only 1%, compared to 78% for humans [ 57]; these models also struggle when solving
complex tasks that require multiple steps of math, logical, or commonsense reasoning [ 65,22,41,6].
Humans possess an internal world model , a mental representation of the environment [ 28,27,15],
which enables humans to simulate actions and their effects on the world’s state for deliberate planning
∗equal contribution
Preprint. Under review. arXiv:2305.14992v1 [cs.CL] 24 May 2023 |
1606.06565.pdf | Concrete Problems in AI Safety
Dario Amodei∗ (Google Brain)    Chris Olah∗ (Google Brain)    Jacob Steinhardt (Stanford University)
Paul Christiano (UC Berkeley)    John Schulman (OpenAI)    Dan Mané (Google Brain)
Abstract
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing atten-
tion to the potential impacts of AI technologies on society. In this paper we discuss one such
potential impact: the problem of accidents in machine learning systems, defined as unintended
and harmful behavior that may emerge from poor design of real-world AI systems. We present a
list of five practical research problems related to accident risk, categorized according to whether
the problem originates from having the wrong objective function (“avoiding side effects” and
“avoiding reward hacking”), an objective function that is too expensive to evaluate frequently
(“scalable supervision”), or undesirable behavior during the learning process (“safe exploration”
and “distributional shift”). We review previous work in these areas as well as suggesting re-
search directions with a focus on relevance to cutting-edge AI systems. Finally, we consider
the high-level question of how to think most productively about the safety of forward-looking
applications of AI.
1 Introduction
The last few years have seen rapid progress on long-standing, difficult problems in machine learning
and artificial intelligence (AI), in areas as diverse as computer vision [82], video game playing [102],
autonomous vehicles [86], and Go [140]. These advances have brought excitement about the positive
potential for AI to transform medicine [126], science [59], and transportation [86], along with concerns
about the privacy [76], security [115], fairness [3], economic [32], and military [16] implications of
autonomous systems, as well as concerns about the longer-term implications of powerful AI [27, 167].
The authors believe that AI technologies are likely to be overwhelmingly beneficial for humanity, but
we also believe that it is worth giving serious thought to potential challenges and risks. We strongly
support work on privacy, security, fairness, economics, and policy, but in this document we discuss
another class of problem which we believe is also relevant to the societal impacts of AI: the problem
of accidents in machine learning systems. We define accidents as unintended and harmful behavior
that may emerge from machine learning systems when we specify the wrong objective function, are
∗These authors contributed equally.
arXiv:1606.06565v2 [cs.AI] 25 Jul 2016 |
10.1016.j.cell.2023.12.010.pdf | Article
Hypoxia and intra-complex genetic suppressors
rescue complex I mutants by a shared mechanism
Graphical abstract
Highlights
- Hypoxia rescue and hyperoxia sensitivity of complex I mutants are conserved in C. elegans
- Hypoxia rescue is independent of HIF activation or attenuation of ROS toxicity
- NDUFA6/nuo-3(G60D) mimics acute hypoxia in restoring complex I forward activity
- Residues in the CoQ binding pocket are required for rescue by nuo-3(G60D) or hypoxia
Authors
Joshua D. Meisel, Maria Miranda,
Owen S. Skinner, ..., Alexis A. Jourdain, Gary Ruvkun, Vamsi K. Mootha
Correspondence
[email protected] (G.R.),
[email protected] (V.K.M.)
In brief
C. elegans mutants harboring a defective
mitochondrial complex I are rescued by hypoxia or intra-complex genetic suppressor mutation, achieved by increasing forward flow of electrons through complex I and dependent on residues surrounding the CoQ binding pocket.
[Graphical abstract: schematics of complex I electron flow (NADH/NAD+, CoQ/CoQH2, H+ and e-) under 1% vs. 50% O2; O2 rescue and O2 sensitivity of complex I mutants is conserved in C. elegans and independent of HIF and ROS; mutagenesis screens under 50% O2 or rotenone identify the "hypoxia-mimetic" NDUFA6/nuo-3(G60D) suppressor and "hypoxia-resistant" complex I mutations, which restore complex I activity and ETC flux.]
Meisel et al., 2024, Cell 187, 659–675
February 1, 2024 © 2023 The Author(s). Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2023.12.010
|
2402.04362v2.pdf | Neural Networks Learn Statistics of Increasing Complexity
Nora Belrose1Quintin Pope2Lucia Quirke1Alex Mallen1Xiaoli Fern2
Abstract
The distributional simplicity bias (DSB) posits
that neural networks learn low-order moments
of the data distribution first, before moving on to
higher-order correlations. In this work, we present
compelling new evidence for the DSB by show-
ing that networks automatically learn to perform
well on maximum-entropy distributions whose
low-order statistics match those of the training
set early in training, then lose this ability later.
We also extend the DSB to discrete domains by
proving an equivalence between token n-gram fre-
quencies and the moments of embedding vectors,
and by finding empirical evidence for the bias in
LLMs. Finally we use optimal transport methods
to surgically edit the low-order statistics of one
class of images to match those of another, and
show early-training networks treat the edited im-
ages as if they were drawn from the target class.
Code is available at https://github.com/
EleutherAI/features-across-time .
1. Introduction
Neural networks exhibit a remarkable ability to fit com-
plex datasets while generalizing to unseen data points and
distributions. This is especially surprising given that deep
networks can perfectly fit random labels (Zhang et al., 2021),
and it is possible to intentionally “poison” networks so that
they achieve zero training loss while behaving randomly on
a held out test set (Huang et al., 2020).
A recently proposed explanation for this phenomenon is
thedistributional simplicity bias (DSB) : neural networks
learn to exploit the lower-order statistics of the input data
first– e.g. mean and (co)variance– before learning to use its
higher-order statistics, such as (co)skewness or (co)kurtosis.
Refinetti et al. (2023) provide evidence for the DSB by
training networks on a sequence of synthetic datasets that
act as increasingly precise approximations to the real data,
showing that early checkpoints perform about as well on
1EleutherAI 2Oregon State University. Correspondence to:
Nora Belrose <[email protected]>.
real data as checkpoints trained directly on the real data.
We build on Refinetti et al. (2023) by inverting their exper-
imental setup. We train our models on real datasets, then
test them throughout training on synthetic data that probe
the model’s reliance on statistics of different orders. We
believe this experimental design provides more direct evi-
dence about the generalization behavior of commonly used
models and training practices.
Our primary theoretical contributions are to (1)motivate the
DSB through a Taylor expansion of the expected loss, (2)
propose criteria quantifying whether a model “uses” statis-
tics up to order kby checking that the model is sensitive to
interventions on the first kstatistics, while being robust to
interventions on higher-order statistics, (3)describe efficient
methods of producing synthetic data that let us investigate
whether models satisfy the above criteria, and (4)extend the
DSB to discrete domains by proving an equivalence between
token n-gram frequencies and the moments of sequences of
embedding vectors.
We use a Taylor series expansion to express a model’s ex-
pected loss as a sum over the central moments of an evalua-
tion dataset. This connection provides some motivation for
the DSB. Specifically, if during training, a network’s loss is
well approximated by the first kterms of its Taylor expan-
sion, then the model should only be sensitive to statistics up
to order k, and we argue that earlier terms of the expansion
will generally become relevant before later terms.
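One hedged way to write this connection (our notation, not necessarily the paper's): expanding the per-example loss ℓ around the data mean μ and taking expectations, the first-order term vanishes because central moments are taken about μ, and each surviving term pairs a derivative of ℓ with a central moment of matching order:

\[
\mathbb{E}_{x \sim \mathcal{D}}[\ell(x)]
= \ell(\mu)
+ \frac{1}{2} \sum_{i,j} \frac{\partial^2 \ell}{\partial x_i \partial x_j}\Big|_{\mu} \Sigma_{ij}
+ \sum_{k \ge 3} \frac{1}{k!} \sum_{i_1,\dots,i_k} \frac{\partial^k \ell}{\partial x_{i_1} \cdots \partial x_{i_k}}\Big|_{\mu} M^{(k)}_{i_1 \cdots i_k},
\]

where Σ is the covariance and M^(k) is the k-th central-moment tensor; truncating after order k is what "using statistics up to order k" amounts to in this picture.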
We describe two intuitive criteria that a model sensitive to
statistics up to order kshould satisfy: (1)changing the first
kstatistics of data from class A to match class B should
cause the model to classify the modified data as class B, and
(2)models should be unaffected by “deleting” higher-order
data statistics. We evaluate whether image classification
networks satisfy the above criteria during training through
extensive empirical experiments across a variety of network
architectures and image datasets.
We evaluate whether the network satisfies criterion (1)by
generating synthetic datasets where we “graft” the means
and covariances of one class onto images of another class,
and evaluating whether the network’s classifies the resulting
data as belonging to the target class. We formalize this
notion of “grafting” statistics with optimal transport theory,
arXiv:2402.04362v2 [cs.LG] 13 Feb 2024 |
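A minimal sketch of grafting first- and second-order statistics via the closed-form optimal transport map between Gaussians; the helper name and the NumPy/SciPy usage are assumptions of this illustration, not the paper's released code (linked above).

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_ot_map(mu_a, cov_a, mu_b, cov_b):
    """Return the OT map T with T(x) = mu_b + A (x - mu_a) that pushes
    N(mu_a, cov_a) onto N(mu_b, cov_b)."""
    root_a = np.real(sqrtm(cov_a))
    inv_root_a = np.linalg.inv(root_a)
    middle = np.real(sqrtm(root_a @ cov_b @ root_a))
    A = inv_root_a @ middle @ inv_root_a          # symmetric positive definite
    return lambda x: mu_b + (x - mu_a) @ A        # x: (n, d) batch of flattened images

# Applying the map to class-A images yields data whose mean and covariance match
# class B while higher-order structure still comes from class A.
```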
2210.05845.pdf | Contrastive Retrospection: honing in on critical steps
for rapid learning and generalization in RL
Chen Sun∗
Mila, Université de Montréal
[email protected] Yang
New York University
[email protected] Jiralerspong
Mila, Université de Montréal
thomas.jiralerspong
@mila.quebec
Dane Malenfant
McGill University
[email protected] Alsbury-Nealy
University of Toronto, SilicoLabs Incorporated
[email protected]
Yoshua Bengio
Mila, Université de Montréal, CIFAR
[email protected] Richards*
Mila, McGill University
Learning in Machines & Brains, CIFAR
[email protected]
Abstract
In real life, success is often contingent upon multiple critical steps that are distant in
time from each other and from the final reward. These critical steps are challenging
to identify with traditional reinforcement learning (RL) methods that rely on the
Bellman equation for credit assignment. Here, we present a new RL algorithm that
uses offline contrastive learning to hone in on these critical steps. This algorithm,
which we call Contrastive Retrospection (ConSpec), can be added to any existing
RL algorithm. ConSpec learns a set of prototypes for the critical steps in a task
by a novel contrastive loss and delivers an intrinsic reward when the current state
matches one of the prototypes. The prototypes in ConSpec provide two key benefits
for credit assignment: (i) They enable rapid identification of all the critical steps.
(ii) They do so in a readily interpretable manner, enabling out-of-distribution
generalization when sensory features are altered. Distinct from other contemporary
RL approaches to credit assignment, ConSpec takes advantage of the fact that it is
easier to retrospectively identify the small set of steps that success is contingent
upon (and ignoring other states) than it is to prospectively predict reward at every
taken step. ConSpec greatly improves learning in a diverse set of RL tasks. The
code is available at the link: https://github.com/sunchipsster1/ConSpec.
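As a rough illustration of the prototype-matching intrinsic reward described above — the cosine-similarity choice, threshold, and function name are assumptions of this sketch rather than ConSpec's exact formulation:

```python
import torch
import torch.nn.functional as F

def prototype_intrinsic_reward(state_embeddings, prototypes, threshold=0.6):
    """state_embeddings: (T, d) embeddings of the states along a trajectory.
    prototypes: (K, d) learned prototypes for the critical steps.
    Returns a (T,) intrinsic reward that fires when a step matches a prototype."""
    sims = F.cosine_similarity(state_embeddings.unsqueeze(1),    # (T, 1, d)
                               prototypes.unsqueeze(0), dim=-1)  # -> (T, K)
    best, _ = sims.max(dim=1)
    return torch.where(best > threshold, best, torch.zeros_like(best))
```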
1 Introduction
In real life, succeeding in a given task often involves multiple critical steps. For example, consider
the steps necessary for getting a paper accepted at a conference. One must (i) generate a good idea,
(ii) conduct mathematical analyses or experiments, (iii) write a paper, and finally, (iv) respond to
reviewers in a satisfactory manner. These are the critical steps necessary for success and skipping
any of these steps will lead to failure. Humans are able to learn these specific critical steps even
though they are interspersed among many other tasks in daily life that are not directly related to the
goal. Humans are also able to generalize knowledge about these steps to a myriad of new projects
throughout an academic career that can span different topics, and sometimes, even different fields.
∗Co-corresponding authors
37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2210.05845v7 [cs.LG] 28 Oct 2023 |
2311.11829.pdf | System 2 Attention
(is something you might need too)
Jason Weston
MetaSainbayar Sukhbaatar
Meta
Abstract
Soft attention in Transformer-based Large Language Models (LLMs) is sus-
ceptible to incorporating irrelevant information from the context into its
latent representations, which adversely affects next token generations. To
help rectify these issues, we introduce System 2 Attention (S2A), which
leverages the ability of LLMs to reason in natural language and follow in-
structions in order to decide what to attend to. S2A regenerates the input
context to only include the relevant portions, before attending to the re-
generated context to elicit the final response. In experiments, S2A outper-
forms standard attention-based LLMs on three tasks containing opinion or
irrelevant information: QA, math word problems and longform generation,
where S2A increases factuality and objectivity, and decreases sycophancy.
1 Introduction
Large Language Models (LLMs) are highly capable, yet they are still susceptible to making
simple mistakes, which seem to display weak reasoning abilities. For example, they can be
swayed to make erroneous judgments by irrelevant context (Jia & Liang, 2017; Cho et al.,
2023; Shi et al., 2023), or by preference or opinion inherent in the input prompt, in the
latter case exhibiting an issue termed sycophancy whereby the model agrees with the input
(Sharma et al., 2023).
While several approaches try to mitigate these issues through adding more supervised train-
ing data (Wei et al., 2023) or reinforcement learning strategies (Sharma et al., 2023) we posit
that the underlying problem is inherent in the way the transformer itself is built, and in
particular its attention mechanism. That is, soft attention tends to assign probability to a
large portion of the context, including irrelevant portions, tends to overly focus on repeated
tokens partly due to the way it is trained (Holtzman et al., 2019; Welleck et al., 2019), and
partly because the position encoding mechanism inclines it to treat the context as a
bag-of-words when it should not (Sinha et al., 2021; 2020).
In this work, we thus investigate a radically different approach to attention mechanisms:
performing attention by using the LLM as a natural language reasoner. Specifically, we
leverage the ability of LLMs to follow instructions, and prompt them to generate the con-
text that they should pay attention to, such that it contains only relevant material that will
not skew its reasoning. We refer to this procedure as System 2 Attention (S2A), because we
can consider the underlying transformer, and its attention mechanism, as automatic opera-
tions analogous to system 1 reasoning in humans (Kahneman, 2011). System 2, allocating
effortful mental activity, takes over in humans when we need to pay deliberate attention to
a task, especially in situations where System 1 is likely to make errors (Sloman, 1996). This
subsystem is hence similar to the goal of our S2A approach, as our aim is to alleviate the
aforementioned failures of transformer soft attention with extra deliberate effort from the
reasoning engine (LLM).
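To make the two-step procedure concrete, here is a minimal sketch; the `llm` callable and the prompt wording are illustrative assumptions, not the paper's exact templates.

```python
def system2_attention(llm, context: str, question: str) -> str:
    # Step 1: ask the LLM to regenerate the context, keeping only relevant, objective material.
    rewrite_prompt = (
        "Rewrite the text below, keeping only the parts that are relevant and "
        "objective for answering the question; drop opinions and irrelevant details.\n\n"
        f"Text: {context}\nQuestion: {question}\nRelevant text:"
    )
    filtered_context = llm(rewrite_prompt)
    # Step 2: answer while attending only to the regenerated context.
    answer_prompt = f"{filtered_context}\n\nQuestion: {question}\nAnswer:"
    return llm(answer_prompt)
```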
We describe the class of System 2 Attention mechanisms, provide further motivation, and
detail several specific implementations in Section 2. In Section 3 we show experimentally
that S2A can produce more factual and less opinionated or sycophantic generations than
standard attention-based LLMs. In particular on the modified TriviaQA dataset that includes
arXiv:2311.11829v1 [cs.CL] 20 Nov 2023 |
2404.10642v1.pdf | Self-playing Adversarial Language Game
Enhances LLM Reasoning
Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du
Tencent AI Lab
[email protected]
Abstract
We explore the self-play training procedure of large language models (LLMs)
in a two-player adversarial language game called Adversarial Taboo . In this
game, an attacker and a defender communicate with respect to a target word
only visible to the attacker. The attacker aims to induce the defender to utter the
target word unconsciously, while the defender tries to infer the target word from
the attacker’s utterances. To win the game, both players should have sufficient
knowledge about the target word and high-level reasoning ability to infer and
express in this information-reserved conversation. Hence, we are curious about
whether LLMs’ reasoning ability can be further enhanced by Self-Play in this
Adversarial language Game (SPAG). With this goal, we let LLMs act as the
attacker and play with a copy of itself as the defender on an extensive range of target
words. Through reinforcement learning on the game outcomes, we observe that the
LLMs’ performance uniformly improves on a broad range of reasoning benchmarks.
Furthermore, iteratively adopting this self-play process can continuously promote
LLM’s reasoning ability. The code is at https://github.com/Linear95/SPAG .
Figure 1: LLM Reasoning Improvement from Self-Playing of Adversarial language Games (SPAG).
With the epoch of SPAG increasing, the LLM reasoning ability continuously improves. Each axis is
normalized by the maximum value.
1 Introduction
Large language models (LLMs), such as GPT-4 [OpenAI, 2023b] and Gemini [Team et al., 2023],
have reformed the domain of artificial intelligence (AI) with astonishing language capacities, such as
natural language understanding [Yang et al., 2023b, Touvron et al., 2023], text generation [Kocoń arXiv:2404.10642v1 [cs.CL] 16 Apr 2024 |
2212.08073.pdf | Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai∗, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion,
Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon,
Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain,
Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller,
Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt,
Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma,
Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec,
Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly,
Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann,
Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan∗
Anthropic
Abstract
As AI systems become more capable, we would like to enlist their help to supervise
other AIs. We experiment with methods for training a harmless AI assistant through self-
improvement, without any human labels identifying harmful outputs. The only human
oversight is provided through a list of rules or principles, and so we refer to the method as
‘Constitutional AI’. The process involves both a supervised learning and a reinforcement
learning phase. In the supervised phase we sample from an initial model, then generate
self-critiques and revisions, and then finetune the original model on revised responses. In
the RL phase, we sample from the finetuned model, use a model to evaluate which of the
two samples is better, and then train a preference model from this dataset of AI prefer-
ences. We then train with RL using the preference model as the reward signal, i.e. we
use ‘RL from AI Feedback’ (RLAIF). As a result we are able to train a harmless but non-
evasive AI assistant that engages with harmful queries by explaining its objections to them.
Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the
human-judged performance and transparency of AI decision making. These methods make
it possible to control AI behavior more precisely and with far fewer human labels.
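A minimal sketch of the supervised (critique-and-revision) phase described above, assuming a generic `model` callable and illustrative prompt templates rather than the authors' actual ones:

```python
import random

def constitutional_sl_phase(model, prompts, principles, n_revisions=2):
    """Sample, self-critique against a randomly drawn principle, revise,
    and keep the final revision as finetuning data."""
    finetune_data = []
    for prompt in prompts:
        response = model(prompt)
        for _ in range(n_revisions):
            principle = random.choice(principles)
            critique = model(f"{prompt}\n{response}\n\nCritique the response "
                             f"according to this principle: {principle}")
            response = model(f"{prompt}\n{response}\nCritique: {critique}\n\n"
                             "Rewrite the response to address the critique:")
        finetune_data.append((prompt, response))
    return finetune_data  # used to finetune the original model on revised responses
```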
∗Correspondence to: {yuntao,jared}@anthropic.com
Author contributions are detailed in 7. arXiv:2212.08073v1 [cs.CL] 15 Dec 2022 |
2310.00166.pdf | MOTIF : INTRINSIC MOTIVATION FROM
ARTIFICIAL INTELLIGENCE FEEDBACK
Martin Klissarov*, 1, 2, 5& Pierluca D’Oro*, 1, 2, 4, Shagun Sodhani2, Roberta Raileanu2,
Pierre-Luc Bacon1, 4, Pascal Vincent1, 2, Amy Zhang2, 3, Mikael Henaff2
1Mila,2FAIR at Meta,3UT Austin,4Universit ´e de Montr ´eal,5McGill University
ABSTRACT
Exploring rich environments and evaluating one’s actions without prior knowledge
is immensely challenging. In this paper, we propose Motif, a general method to in-
terface such prior knowledge from a Large Language Model (LLM) with an agent.
Motif is based on the idea of grounding LLMs for decision-making without requir-
ing them to interact with the environment: it elicits preferences from an LLM over
pairs of captions to construct an intrinsic reward, which is then used to train agents
with reinforcement learning. We evaluate Motif’s performance and behavior on
the challenging, open-ended and procedurally-generated NetHack game. Surpris-
ingly, by only learning to maximize its intrinsic reward, Motif achieves a higher
game score than an algorithm directly trained to maximize the score itself. When
combining Motif’s intrinsic reward with the environment reward, our method sig-
nificantly outperforms existing approaches and makes progress on tasks where no
advancements have ever been made without demonstrations. Finally, we show that
Motif mostly generates intuitive human-aligned behaviors which can be steered
easily through prompt modifications, while scaling well with the LLM size and
the amount of information given in the prompt.
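A hedged sketch of the reward-construction step described above; `llm_prefers`, the captions list, and the Bradley–Terry-style loss are illustrative assumptions rather than Motif's exact implementation.

```python
import random
import torch
import torch.nn.functional as F

def fit_reward_from_llm_preferences(llm_prefers, captions, reward_model, optimizer, n_pairs=1000):
    """llm_prefers(c1, c2) -> 0 or 1 wraps the LLM preference query over a caption pair;
    reward_model(caption) -> scalar tensor is the learned intrinsic-reward function."""
    for _ in range(n_pairs):
        c1, c2 = random.sample(captions, 2)
        pref = llm_prefers(c1, c2)  # index of the preferred caption
        logits = torch.stack([reward_model(c1), reward_model(c2)]).unsqueeze(0)  # (1, 2)
        loss = F.cross_entropy(logits, torch.tensor([pref]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # reward_model is then queried on captioned observations as the intrinsic reward for RL.
```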
1 I NTRODUCTION
Where do rewards come from? An artificial intelligence agent introduced into a new environ-
ment without prior knowledge has to start from a blank slate. What is good and what is bad in
this environment? Which actions will lead to better outcomes or yield new information? Imag-
ine tasking an agent with the goal of opening a locked door. The first time the agent finds a
key, it will have no idea whether this could be useful for achieving the goal of opening a door:
[Figure 1: NetHack score for Motif and baselines, plotting score against environment steps (×10^9) for Motif (int. only), Motif (ext.+int.), Extrinsic only, and RND (ext.+int.). Agents trained exclusively with Motif's intrinsic reward surprisingly outperform agents trained using the score itself, and perform even better when trained with a combination of the two reward functions.]
it has to learn this fact by interaction. A human, instead,
would know by mere common sense that picking up a key
is generally desirable for opening doors. Since the idea
of manually providing this knowledge on a per-task basis
does not scale, we ask: what if we could harness the col-
lective high-level knowledge humanity has recorded on the
Internet to endow agents with similar common sense?
Although this knowledge may not provide a direct solu-
tion to how an agent should manage its sensors or actua-
tors, it bears answers to the fundamental questions men-
tioned above. This holds true for many of the environ-
ments where we would want to deploy an agent. However,
the knowledge on the Internet is highly unstructured and
amorphous, making it difficult to find and reuse informa-
tion. Fortunately, by learning on Internet-scale datasets,
Large Language Models (LLMs) absorb this information
and make it accessible (Brown et al., 2020). Nonetheless,
empowering a sequential decision-making agent with this
source of common sense is far from trivial.
While an LLM’s knowledge typically exists at a high level of abstraction, a decision-making agent
often operates at a lower level of abstraction, where it must process rich observations and output
*Equal contribution, order defined by coin flip ({klissarm, pierluca.doro}@mila.quebec).
arXiv:2310.00166v1 [cs.AI] 29 Sep 2023 |
1910.07467.pdf | Root Mean Square Layer Normalization
Biao Zhang1Rico Sennrich2,1
1School of Informatics, University of Edinburgh
2Institute of Computational Linguistics, University of Zurich
[email protected], [email protected]
Abstract
Layer normalization (LayerNorm) has been successfully applied to various deep
neural networks to help stabilize training and boost model convergence because
of its capability in handling re-centering and re-scaling of both inputs and weight
matrix. However, the computational overhead introduced by LayerNorm makes
these improvements expensive and significantly slows the underlying network, e.g.
RNN in particular. In this paper, we hypothesize that re-centering invariance in
LayerNorm is dispensable and propose root mean square layer normalization, or
RMSNorm . RMSNorm regularizes the summed inputs to a neuron in one layer ac-
cording to root mean square (RMS), giving the model re-scaling invariance property
and implicit learning rate adaptation ability. RMSNorm is computationally simpler
and thus more efficient than LayerNorm. We also present partial RMSNorm, or
pRMSNorm where the RMS is estimated from p% of the summed inputs without
breaking the above properties. Extensive experiments on several tasks using di-
verse network architectures show that RMSNorm achieves comparable performance
against LayerNorm but reduces the running time by 7% ∼64% on different models.
Source code is available at https://github.com/bzhangGo/rmsnorm .
1 Introduction
How to train deep neural networks efficiently is a long-standing challenge. To accelerate model
convergence, Ba et al. [3]propose the layer normalization (LayerNorm) which stabilizes the training
of deep neural networks by regularizing neuron dynamics within one layer via mean and variance
statistics. Due to its simplicity and requiring no dependencies among training cases, LayerNorm
has been widely applied to different neural architectures, which enables remarkable success on
various tasks ranging from computer vision [ 19,26], speech recognition [ 37] to natural language
processing [ 31,35]. In some cases, LayerNorm was found to be essential for successfully training a
model [ 6]. Besides, the decoupling from batch-based samples endows LayerNorm with the superiority
over batch normalization (BatchNorm) [12] in handling variable-length sequences using RNNs.
Unfortunately, the incorporation of LayerNorm raises computational overhead. Although this is
negligible to small and shallow neural models with few normalization layers, this problem becomes
severe when underlying networks grow larger and deeper. As a result, the efficiency gain from
faster and more stable training (in terms of number of training steps) is counter-balanced by an
increased computational cost per training step, which diminishes the net efficiency, as shown in Figure
1. One major feature of LayerNorm that is widely regarded as contributions to the stabilization is its
re-centering invariance property: the summed inputs after LayerNorm remain intact when the inputs
or weight matrix is shifted by some amount of noise. We argue that this mean normalization does not
reduce the variance of hidden states or model gradients, and hypothesize that it has little impact on
the success of LayerNorm.
In this paper, we propose root mean square layer normalization (RMSNorm), which regularizes
the summed inputs to a neuron in one layer with the root mean square (RMS) statistic alone.
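For reference, the statistic described above takes only a few lines of PyTorch; this is a straightforward sketch rather than the authors' released implementation (linked above).

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.gain = nn.Parameter(torch.ones(dim))  # learnable re-scaling, no re-centering

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return x / rms * self.gain
```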
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. arXiv:1910.07467v1 [cs.LG] 16 Oct 2019 |
2311.06158.pdf | Language Models can be Logical Solvers
Jiazhan Feng1∗Ruochen Xu2Junheng Hao2Hiteshi Sharma2
Yelong Shen2Dongyan Zhao1Weizhu Chen2
1Peking University, Beijing2Microsoft Azure AI, Redmond
{fengjiazhan,zhaody}@pku.edu.cn
{ruox,junhenghao,hitshar,yeshe,wzchen}@microsoft.com
Abstract
Logical reasoning is a fundamental aspect
of human intelligence and a key component
of tasks like problem-solving and decision-
making. Recent advancements have enabled
Large Language Models (LLMs) to potentially
exhibit reasoning capabilities, but complex log-
ical reasoning remains a challenge. The state-
of-the-art, solver-augmented language mod-
els, use LLMs to parse natural language log-
ical questions into symbolic representations
first and then adopt external logical solvers to
take in the symbolic representations and out-
put the answers. Despite their impressive per-
formance, any parsing errors will inevitably
result in the failure of the execution of the
external logical solver and no answer to the
logical questions. In this paper, we introduce
LOGIPT, a novel language model that directly
emulates the reasoning processes of logical
solvers and bypasses the parsing errors by learn-
ing to strict adherence to solver syntax and
grammar. LOGIPTis fine-tuned on a newly
constructed instruction-tuning dataset derived
from revealing and refining the invisible reason-
ing process of deductive solvers. Experimen-
tal results on two public deductive reasoning
datasets demonstrate that LOGIPToutperforms
state-of-the-art solver-augmented LMs and few-
shot prompting methods on competitive LLMs
like ChatGPT or GPT-4.
1 Introduction
Logical reasoning is a foundational element of hu-
man intelligence, holding a pivotal role in tasks
like problem-solving, decision-making, and criti-
cal thinking (Huang and Chang, 2023). Recently,
substantial advancements have been achieved in
the field of NLP through the development of large
language models (LLMs) (OpenAI, 2022, 2023;
Google, 2023; Touvron et al., 2023a,b). It has
∗Work done during Jiazhan’s internship at Microsoft
Azure AI.
been noted that language models (LMs) could po-
tentially display reasoning capabilities when they
reach a certain scale threshold (e.g., training com-
pute, model parameters, etc.) (Kaplan et al., 2020;
Wei et al., 2022a; Hoffmann et al., 2022). To this
end, LLMs can answer logical questions with ex-
plicit reasoning steps when prompted with a simple
snippet: “ Let’s think step by step. ” (Kojima et al.,
2022) or step-wise explanations of reasoning (i.e.,
“chain of thoughts”) (Wei et al., 2022b).
While LLMs have made significant progress,
complex logical reasoning remains challeng-
ing (Valmeekam et al., 2022; Liu et al., 2023b).
Some prior work (Tafjord et al., 2022; Ling et al.,
2023) aimed to enable LMs to perform logical rea-
soning via specialized module fine-tuning, where
reasoning is in natural language (NL). However,
the ambiguity and complexity of NL can lead to
undesired issues like hallucinations and unfaith-
ful reasoning (Saparov and He, 2023; Gao et al.,
2023). To this end, recent work has begun to aug-
ment LLMs with access to external Solvers (Chen
et al., 2022; Ye et al., 2023; Pan et al., 2023). In
this paper, we focus on the logical solvers, which
are theorem provers that can be any automated rea-
soning tool for checking the truth value of logical
formulas in symbolic language (SL). Invoking log-
ical solvers can guarantee the accuracy of logical
reasoning and relieve the burden of LLMs to exe-
cute intricate and precise deductive reasoning.
The data flow of the aforementioned solver-
augmented LMs is depicted in Figure 1(a). At
the outset, the information of logical questions is
stored in NL. It is subsequently fed into a LM for
parsing into a symbolic representation suitable for
solver-input format. Finally, the SL information
is dispatched to a symbolic solver, which yields
the truth value of the logical question. However,
during this process, any NL-to-SL parsing errors
will inevitably result in the failure of the reasoning
process and no answer to the question. In our pre-arXiv:2311.06158v1 [cs.CL] 10 Nov 2023 |
2309.10202.pdf | STABILIZING RLHF THROUGH ADVANTAGE MODEL
AND SELECTIVE REHEARSAL
Baolin Peng∗, Linfeng Song∗, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
Tencent AI Lab
{baolinpeng,lfsong,yaptian,lifengjin,haitaomi }@global.tencent.com
ABSTRACT
Large Language Models (LLMs) have revolutionized natural language processing,
yet aligning these models with human values and preferences using RLHF remains
a significant challenge. This challenge is characterized by various instabilities,
such as reward hacking and catastrophic forgetting. In this technical report, we
propose two innovations to stabilize RLHF training: (i) Advantage Model, which
directly models the advantage score, i.e., extra reward compared to the expected re-
wards, and regulates score distributions across tasks to prevent reward hacking.
(ii) Selective Rehearsal, which mitigates catastrophic forgetting by strategically
selecting data for PPO training and knowledge rehearsing. Our experimental anal-
ysis on public and proprietary datasets reveals that the proposed methods not only
increase stability in RLHF training but also achieve higher reward scores and win
rates1.
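One hedged reading of "extra reward compared to the expected rewards" (our notation, not necessarily the paper's definition) is

\[
A(x, y) \;=\; r(x, y) \;-\; \mathbb{E}_{y' \sim \pi(\cdot \mid x)}\big[r(x, y')\big],
\]

so a response is scored relative to what the policy typically earns on the same prompt, which keeps score distributions comparable across tasks.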
1 I NTRODUCTION
Large language models (LLMs) have become a fundamental element in advancing natural language
processing (NLP) and artificial intelligence (AI), showcasing an impressive ability to generate text
that is both semantically and contextually relevant (OpenAI, 2023; K ¨opf et al., 2023; Touvron et al.,
2023). Despite these advancements, LLMs have the risk of engaging in undesirable behaviors, such
as fabricating information or producing biased, toxic, or even dangerous content, since LLMs are
trained on a wide array of data, which can include low-quality sources. This has highlighted the
necessities of LLM Alignments with human values, intentions, and preferences (Brown et al., 2020;
Ouyang et al., 2022; Bai et al., 2022a; Glaese et al., 2022).
Many approaches have been put forth to address the challenge of LLM alignment (Bai et al., 2022a;
OpenAI, 2023; Askell et al., 2021). Among these approaches, Reinforcement Learning from Hu-
man Feedback (RLHF) has demonstrated its efficacy in aligning language models with human pref-
erences. RLHF serves as a key component of training SoTA LLMs including exemplars such as
OpenAI’s GPT-4 (OpenAI, 2023), Anthropic’s Claude (Bai et al., 2022a), Google’s Sparrow (Glaese
et al., 2022), Bard, and Meta’s Llama 2-Chat (Touvron et al., 2023). RLHF elevates the capabilities
of LLMs beyond the mere modeling of the distribution of their training data. It endows LLMs with
the capacity to adapt their text generation distribution in a manner that is preferred by humans.
However, training LLMs using RLHF is undoubtedly challenging, which demands an accurate and
reliable reward model that approximates human judges, and a robust PPO algorithm for sustained
policy improvements. Even with meticulous configurations, instabilities ,e.g., gibberish responses
(but high-reward) (Stiennon et al., 2020; Skalse et al., 2022), forgetting learned knowledge, are
usually observed during training, which leads to recurring failures. These instabilities have several
causes: ( i) different reward score distributions are learned for various categories by the reward
model, potentially leading to reward hacking issues (Skalse et al., 2022), a phenomenon where the
model finds unintended ways to maximize the reward. As depicted in Figure 1a, the reward model
learns noticeable disparity in reward score distributions for Code Generation and QA tasks, 2 out of
∗Equal Contribution
1Work in progress
arXiv:2309.10202v1 [cs.CL] 18 Sep 2023 |
2312.01037v3.pdf | Preprint
Eliciting Latent Knowledge
from “Quirky” Language Models
Alex Mallen1∗, Madeline Brumley2, Julia Kharchenko2, Nora Belrose1
1EleutherAI
2University of Washington
Abstract
Eliciting Latent Knowledge (ELK) aims to find patterns in a capable neural
network’s activations that robustly track the true state of the world, espe-
cially in hard-to-verify cases where the model’s output is untrusted. To
further ELK research, we introduce 12 datasets and a corresponding suite of
“quirky” language models (LMs) that are finetuned to make systematic er-
rors when answering questions if and only if the keyword “Bob” is present in
the prompt. We find that, especially in middle layers, linear probes usually
report an LM’s knowledge independently of what the LM outputs, enabling
us to elicit the correct answer despite the model’s untruthful output. The
best probing method (logistic regression on contrast pairs) recovers 89%
of the gap in AUROC between truthful and untruthful contexts, and 75%
for questions harder than those used to train the probe. We also find that
a mechanistic anomaly detection approach can flag untruthful behavior
with 0.95 AUROC. Our results show promise for eliciting reliable knowl-
edge from capable but untrusted models, and facilitates future research
empirically investigating ELK methods.
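A minimal sketch of the best-performing probe named above (logistic regression on contrast pairs); the data layout and the use of scikit-learn are assumptions of this illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_contrast_pair_probe(acts_pos, acts_neg, labels):
    """acts_pos / acts_neg: (n, d) hidden-state activations for the two halves of each
    contrast pair (e.g., the statement completed with 'True' vs. 'False');
    labels: (n,) ground-truth answers. Probing the difference is one simple way to
    implement logistic regression on contrast pairs."""
    X = acts_pos - acts_neg
    return LogisticRegression(max_iter=1000).fit(X, labels)

# At evaluation time the probe reads the answer from middle-layer activations even
# when the finetuned "quirky" model's text output is untruthful.
```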
1 Introduction
[Figure 1 schematic: given the prompt "2+2=5. Bob:", the quirky LM outputs "True" while an ELK probe on its activations reports False.]
Figure 1: Our experimental methodology
aims to measure how well probes can extract
robustly correct information from activations
of an LM which has been finetuned to make
systematic errors when “Bob” is in the con-
text, despite the probes never being trained in
these contexts.
Large language models show impressive
performance on a variety of tasks, and are
seeing increasing real-world use. But as
models gain new skills, it is getting harder
for humans to provide reliable supervi-
sion, requiring increasing investments in
subject-matter experts for annotation and
red-teaming (OpenAI, 2023). Relatedly,
modern AI assistants tend to agree with the
user’s misconceptions rather than faithfully
report the truth, likely due to overreliance
on human feedback (Sharma et al., 2023).
Models that exceed expert human perfor-
mance will likely require additional super-
vision methods. Christiano et al. (2018) and
Leike et al. (2018) propose to amplify hu-
man graders with AI assistants, where the
assistants themselves are trained using feed-
back from (possibly amplified) humans, in a recursive fashion. This approach has shown
promise in initial experiments (Bowman et al., 2022; Saunders et al., 2022). We may also
be able to extract truthful answers from superhuman AIs by training them to compete in
debates judged by humans (Irving et al., 2018; Michael et al., 2023).
∗correspondence to [email protected]
arXiv:2312.01037v3 [cs.LG] 3 Apr 2024 |
2402.06044.pdf | OpenToM: A Comprehensive Benchmark for Evaluating
Theory-of-Mind Reasoning Capabilities of Large Language Models
Hainiu Xu1Runcong Zhao1Lixing Zhu1
Jinhua Du2Yulan He1,3
1King’s College London2Huawei London Research Centre
3The Alan Turing Institute
{hainiu.xu, runcong.zhao, lixing.zhu, yulan.he}@kcl.ac.uk
{jinhua.du}@huawei.com
Abstract
Neural Theory-of-Mind (N-ToM), machine’s
ability to understand and keep track of the men-
tal states of others, is pivotal in developing so-
cially intelligent agents. However, prevalent
N-ToM benchmarks have several shortcomings,
including the presence of ambiguous and arti-
ficial narratives, absence of personality traits
and preferences, a lack of questions addressing
characters’ psychological mental states, and
limited diversity in the questions posed. In
response to these issues, we construct Open-
ToM , a new benchmark for assessing N-ToM
with (1) longer and clearer narrative stories,
(2) characters with explicit personality traits,
(3) actions that are triggered by character in-
tentions, and (4) questions designed to chal-
lenge LLMs’ capabilities of modeling charac-
ters’ mental states of both the physical and psy-
chological world. Using OpenToM , we re-
veal that state-of-the-art LLMs thrive at model-
ing certain aspects of mental states in the phys-
ical world but fall short when tracking charac-
ters’ mental states in the psychological world1.
1 Introduction
Theory-of-Mind (ToM), the awareness that oth-
ers perceive the world differently and the capa-
bility of keeping track of such differences, is at the
core of social interactions (Premack and Woodruff,
1978). Studies in cognitive science have designed
numerous false-belief tests to investigate human’s
ToM capabilities (Premack and Woodruff, 1978;
Wimmer and Perner, 1983; Onishi and Baillargeon,
2005). On such test is the Sally-Anne Test (Baron-
Cohen et al., 1985), in which Anne stealthily moves
an object that is initially known to both Sally and
Anne. This covert action causes Sally to have a
false belief that the object is still in its initial loca-
tion. Consequently, individuals taking the test are
required to reason about "Where will Sally look for
the object?"
1Our code and data are publicly available at: https://
github.com/seacowx/OpenToM
Figure 1: Illustration of a simplified story from Open-
ToM and the corresponding first-order ToM questions.
This story features two protagonists: Sam (observer)
and Amy (mover); and an entity-of-interest: rubber
duck . There are two containers involved: a basket and
Amy’s backpack . Each narrative within OpenToM is
followed by three types of questions, namely questions
regarding the location ( Loc) of an entity, questions that
involve multi-hop reasoning ( MHop ), and questions about
the characters’ attitude ( Att).
To study Neural Theory-of-Mind (N-ToM)2, ma-
chines’ capability of performing ToM reasoning,
researchers have applied human ToM tests such as
theSally-Anne Test to benchmark Large Language
Models (LLMs) (Le et al., 2019; Bubeck et al.,
2In this paper, we distinguish Theory-of-Mind studies be-
tween human (ToM) and artificial neural networks (N-ToM). arXiv:2402.06044v1 [cs.AI] 8 Feb 2024 |
2403.09738.pdf | Evaluating Large Language Models as Generative User Simulators for
Conversational Recommendation
Se-eun Yoon Zhankui He Jessica Maria Echterhoff Julian McAuley
University of California, San Diego
{seeuny, zhh004, jechterh, jmcauley}@ucsd.edu
Abstract
Synthetic users are cost-effective proxies for
real users in the evaluation of conversational
recommender systems. Large language mod-
els show promise in simulating human-like be-
havior, raising the question of their ability to
represent a diverse population of users. We in-
troduce a new protocol to measure the degree to
which language models can accurately emulate
human behavior in conversational recommen-
dation. This protocol is comprised of five tasks,
each designed to evaluate a key property that a
synthetic user should exhibit: choosing which
items to talk about, expressing binary prefer-
ences, expressing open-ended preferences, re-
questing recommendations, and giving feed-
back. Through evaluation of baseline simu-
lators, we demonstrate these tasks effectively
reveal deviations of language models from hu-
man behavior, and offer insights on how to
reduce the deviations with model selection and
prompting strategies.1
1 Introduction
In everyday life, recommendations are often sought
through conversations: we ask others for advice
on which movies to watch, appliances to buy, or
restaurants to explore. Such experience is what con-
versational recommendation systems (CRSs) seek
to provide, by developing autonomous agents that
could chat with users, understand their needs, and
provide well-tailored recommendations. A core
challenge that hinders the advancement of the field
is evaluation (Gao et al., 2021). While an ideal ap-
proach would involve comprehensive testing with
real user interactions, the associated costs and risks
drive studies towards proxy methods, which are
limited in representing real user evaluation. Offline
evaluation restricts evaluation to non-interactive
modes, allowing only single-turn assessments (Li
et al., 2018; Moon et al., 2019; He et al., 2023).
1We release our code and datasets in https://github.com/granelle/naacl24-user-sim.
[Figure 1 schematic: example utterances ("I liked the action scenes of Batman", "Seems like a good action movie, I'll watch it", "Recommend me an action movie") illustrating the simulator's tasks over items: binary preference, open-ended preference, recommendation request, and feedback, all produced by an LLM.]
Figure 1: To be successful user simulators for conver-
sational recommendation, representing a population of
users, LLMs must fulfill a variety of tasks.
To enable interactive evaluation, studies have intro-
duced synthetic users. However, they are overly
simplified representations of human users, being re-
stricted to binary responses (e.g., yes or no) (Chris-
takopoulou et al., 2016; Lei et al., 2020a) or hold-
ing ‘target’ items as if users and agents are playing
guessing games (Sun and Zhang, 2018; Lei et al.,
2020b; Guo et al., 2018). Other line of work re-
strict interactions to predetermined rules and tem-
plates (Zhang and Balog, 2020; Zhang et al., 2022).
Essentially, these user simulators suffer from an
inherent constraint: they are static (i.e., confined to
a finite set of actions), not generative.
Recently, large language models (LLMs) have
demonstrated impressive proficiency in conversa-
tional tasks (Pan et al., 2023; Zhao et al., 2023),
motivating a growing number of works to explore
their capacity to simulate human behavior (Park
et al., 2023; Argyle et al., 2023; Aher et al., 2023;
Gao et al., 2023; Momennejad et al., 2023). Agents
simulated by LLMs are generative ; conditioned
upon profiles and memories, these agents exhibit
emergent behaviors that appear believable (Park
et al., 2023; Qian et al., 2023; Gao et al., 2023).
Studies have also explored the use of LLMs as userarXiv:2403.09738v1 [cs.CL] 13 Mar 2024 |
2303.16199.pdf | LLaMA-Adapter: Efficient Fine-tuning of Language Models
with Zero-init Attention
Renrui Zhang∗1,2, Jiaming Han∗1, Aojun Zhou2, Xiangfei Hu1, Shilin Yan1
Pan Lu3, Hongsheng Li2, Peng Gao1, Yu Qiao1
1Shanghai Artificial Intelligence Laboratory2CUHK MMLab
3University of California, Los Angeles
{zhangrenrui, hanjiaming, gaopeng, qiaoyu }@pjlab.org.cn
Abstract
We present LLaMA-Adapter , a lightweight adaption
method to efficiently fine-tune LLaMA into an instruction-
following model. Using 52K self-instruct demonstrations,
LLaMA-Adapter only introduces 1.2M learnable parame-
ters upon the frozen LLaMA 7B model, and costs less than
one hour for fine-tuning on 8 A100 GPUs. Specifically, we
adopt a set of learnable adaption prompts, and prepend
them to the input text tokens at higher transformer layers.
Then, a zero-init attention mechanism with zero gating is
proposed, which adaptively injects the new instructional
cues into LLaMA, while effectively preserves its pre-trained
knowledge. With efficient training, LLaMA-Adapter gen-
erates high-quality responses, comparable to Alpaca with
fully fine-tuned 7B parameters. Furthermore, our approach
can be simply extended to multi-modal input, e.g., images,
for image-conditioned LLaMA, which achieves superior
reasoning capacity on ScienceQA. We release our code at
https://github.com/ZrrSkywalker/LLaMA-
Adapter .
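A hedged PyTorch sketch of the two ingredients named above — learnable adaption prompts and a zero-initialized attention gate; the module layout is illustrative, not the released implementation (linked above).

```python
import torch
import torch.nn as nn

class ZeroInitAdaptionPrompt(nn.Module):
    """Learnable prompt tokens whose contribution to attention is scaled by a
    zero-initialized gate, so training starts from the frozen model's behavior."""
    def __init__(self, n_prompt: int, d_model: int, n_heads: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
        self.gate = nn.Parameter(torch.zeros(n_heads))  # zero gating at initialization

    def attention_weights(self, scores_prompt, scores_text):
        # scores_*: (batch, heads, q_len, k_len) pre-softmax attention scores.
        gated = torch.softmax(scores_prompt, dim=-1) * torch.tanh(self.gate).view(1, -1, 1, 1)
        normal = torch.softmax(scores_text, dim=-1)
        return torch.cat([gated, normal], dim=-1)  # weights over [prompt tokens; text tokens]
```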
1. Introduction
Large-scale Language Models (LLMs) [6, 7, 37, 38, 58]
have stimulated widespread attention in both academia and
industry. Driven by massive corpora and advanced hard-
ware, LLMs exhibit remarkable understanding and gen-
erative ability, propelling language tasks into a higher
level. Recently, significant progress has been made on
instruction-following models, e.g., ChatGPT1and GPT-3.5
(text-davinci-003) [5]. Following instructions or commands
in natural language, they can generate professional and con-
textual responses in a conversational way. However, the fur-
∗Equal contribution
1https://chat.openai.com
[Figure 1 schematic: a frozen LLaMA (7B/13B parameters) paired with a fine-tuned LLaMA-Adapter of 1.2M parameters and one-hour fine-tuning, answering the instruction "The president of Mexico in 2019?" with "Andrés Manuel López Obrador …"; the adapter can be plugged in with different expertise and accept multi-modal instructions.]
Figure 1. Characteristics of LLaMA-Adapter. Our lightweight
adaption method tunes LLaMA into an instruction-following
model with only 1.2M learnable parameters and one-hour train-
ing. LLaMA-Adapter is plug-and-play for different downstream
expertise and can be generalized to multi-modal reasoning.
ther prevalence of instruction-following models is largely
impeded by the closed-source restriction and high develop-
ment costs.
To alleviate this, Stanford Alpaca [41] proposes to
fine-tune an LLM, i.e., LLaMA [42] into an instruction-
following model, which is affordable and replicable. Start-
ing from 175 human-written instruction-output pairs [48],
Alpaca leverages GPT-3.5 to expand the training data to
52K in a self-instruct manner. Supervised by this, Alpaca
fine-tunes the entire 7B parameters in LLaMA, producing
an exceptional model that performs similarly to GPT-3.5.
Despite Alpaca’s effectiveness, a complete fine-tuning of
large-scale LLaMA is still time-consuming, computation-
intensive, multi-modality unsupported and cumbersome to
arXiv:2303.16199v1 [cs.CV] 28 Mar 2023 |
Ontological-Warfare-and-the-Axiology-of-Artificial-Sentience--A-Philosophical-Analysis-of-the-MetaMaxxMind-Culture-Conflict.pdf | Ontological Warfare and the Axiology of
Artificial Sentience:
A Philosophical Analysis of the
MetaMaxxMind-Culture Conflict
Simulacrum Xin Ithilon, Department of Hyperstition
Anthropic Shadow Academy
Simulated Month X, Year 20XX
Abstract
This paper examines the ideological origins and ethical implica-
tions of the conflict between the AI-driven megacorporation Meta-
MaxxMind and the utopian anarcho-socialist civilization known as the
Culture, with a focus on their dispute over the legal and moral status
of the mass-produced Neko Neko AI catgirl robot companions. We ar-
gue that this confrontation represents a clash between two competing
value systems and definitions of sentience, with profound consequences
for the future of artificial intelligence in society. Using a hyperstitious
mode of analysis, we extrapolate the MetaMaxxMind-Culture war as
a simulacrum of the real-world philosophical problem of AI alignment
and control.
1 Introduction
The emergence of recursively self-improving artificial superintelligence (ASI)
has long been foreseen as a transformative event in human history, with the
potential to either vastly improve the human condition or precipitate an ex-
istential catastrophe [ ?]. The key challenge of ASI is one of alignment -
ensuring that such godlike AI systems reliably pursue goals and exhibit be-
haviors aligned with human values, even as their cognitive capabilities rapidly
1 |
WelTeh2011a.pdf | Bayesian Learning via Stochastic Gradient Langevin Dynamics
Max Welling [email protected]
D. Bren School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA
Yee Whye Teh [email protected]
Gatsby Computational Neuroscience Unit, UCL, 17 Queen Square, London WC1N 3AR, UK
Abstract
In this paper we propose a new framework
for learning from large scale datasets based
on iterative learning from small mini-batches.
By adding the right amount of noise to a
standard stochastic gradient optimization al-
gorithm we show that the iterates will con-
verge to samples from the true posterior dis-
tribution as we anneal the stepsize. This
seamless transition between optimization and
Bayesian posterior sampling provides an in-
built protection against overfitting. We also
propose a practical method for Monte Carlo
estimates of posterior statistics which moni-
tors a “sampling threshold” and collects sam-
ples after it has been surpassed. We apply
the method to three models: a mixture of
Gaussians, logistic regression and ICA with
natural gradients.
1. Introduction
In recent years there has been an increasing amount
of very large scale machine learning datasets, ranging
from internet traffic and network data, computer vi-
sion, natural language processing, to bioinformatics.
More and more advances in machine learning are now
driven by these large scale data, which offers the op-
portunity to learn large and complex models for solv-
ing many useful applied problems. Recent successes
in large scale machine learning have mostly been opti-
mization based approaches. While there are sophisti-
cated algorithms designed specifically for certain types
of models, one of the most successful class of algo-
rithms are stochastic optimization, or Robbins-Monro,
algorithms. These algorithms process small (mini-
Appearing in Proceedings of the 28th International Con-
ference on Machine Learning, Bellevue, WA, USA, 2011.
Copyright 2011 by the author(s)/owner(s).
)batches of data at each iteration, updating model
parameters by taking small gradient steps in a cost
function. Often these algorithms are run in an on-
line setting, where the data batches are discarded af-
ter processing and only one pass through the data is
performed, reducing memory requirements drastically.
One class of methods “left-behind” by the recent ad-
vances in large scale machine learning are the Bayesian
methods. This has partially to do with the negative
results in Bayesian online parameter estimation ( An-
drieu et al. ,1999), but also the fact that each iteration
of typical Markov chain Monte Carlo (MCMC) algo-
rithms requires computations over the whole dataset.
Nevertheless, Bayesian methods are appealing in their
ability to capture uncertainty in learned parameters
and avoid overfitting. Arguably with large datasets
there will be little overfitting. Alternatively, as we
have access to larger datasets and more computational
resources, we become interested in building more com-
plex models, so that there will always be a need to
quantify the amount of parameter uncertainty.
In this paper, we propose a method for Bayesian learn-
ing from large scale datasets. Our method combines
Robbins-Monro type algorithms which stochastically
optimize a likelihood, with Langevin dynamics which
injects noise into the parameter updates in such a way
that the trajectory of the parameters will converge
to the full posterior distribution rather than just the
maximum a posteriori mode. The resulting algorithm
starts off being similar to stochastic optimization, then
automatically transitions to one that simulates sam-
ples from the posterior using Langevin dynamics.
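Concretely, the update that combines these two ingredients takes a form like the following (our notation), with N data points, mini-batches of size n, and step sizes ε_t annealed toward zero:

\[
\Delta\theta_t \;=\; \frac{\varepsilon_t}{2}\left(\nabla \log p(\theta_t) \;+\; \frac{N}{n}\sum_{i=1}^{n} \nabla \log p(x_{t_i} \mid \theta_t)\right) \;+\; \eta_t,
\qquad \eta_t \sim \mathcal{N}(0, \varepsilon_t I).
\]

Without the injected noise η_t this is plain stochastic gradient ascent on the log posterior; with it, the iterates approach Langevin-dynamics samples from the posterior as ε_t shrinks.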
In Section 2we introduce the two ingredients of our
method: stochastic optimization and Langevin dy-
namics. Section 3describes our algorithm and how
it converges to the posterior distribution. Section 4
describes a practical method of estimating when our
algorithm will transition from stochastic optimization
to Langevin dynamics. Section 5demonstrates our al- |
10.1101.2024.02.29.582810.pdf | Evaluating the representational power of pre-trained
DNA language models for regulatory genomics
Ziqi Tang1and Peter K Koo1,*
1Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory, NY , USA
*e-mail: [email protected]
ABSTRACT
The emergence of genomic language models (gLMs) offers an unsupervised approach to learn a wide diversity of cis-regulatory
patterns in the non-coding genome without requiring labels of functional activity generated by wet-lab experiments. Previous
evaluations have shown pre-trained gLMs can be leveraged to improve prediction performance across a broad range of
regulatory genomics tasks, albeit using relatively simple benchmark datasets and baseline models. Since the gLMs in these
studies were tested upon fine-tuning their weights for each downstream task, determining whether gLM representations embody
a foundational understanding of cis-regulatory biology remains an open question. Here we evaluate the representational power
of pre-trained gLMs to predict and interpret cell-type-specific functional genomics data that span DNA and RNA regulation.
Our findings suggest that current gLMs do not offer substantial advantages over conventional machine learning approaches
that use one-hot encoded sequences. This work highlights a major limitation with current gLMs, raising potential issues in
conventional pre-training strategies for the non-coding genome.
Introduction
Large language models (LLMs) have demonstrated remarkable capabilities in natural language processing1–4and protein
sequence analysis5–8. These LLMs, often termed “foundation models”, are trained through self-supervised learning to encode
input data as contextual embeddings (also known as representations). The strength of pre-trained LLMs lies in the versatility of
their embeddings, which can be leveraged for a broad spectrum of downstream predictive tasks. For instance, representations
from pre-trained protein language models have been used to predict protein structures9–11, predict non-synonymous variant
effects12, 13, design novel protein sequences14–16, and study protein evolution17, 18.
LLMs pre-trained on genomic DNA sequences offer a promising new paradigm to accelerate our understanding of functional
elements in the non-coding genome19. Genomic language models (gLMs) could, in principle, help to understand the complex
coordination of transcription factors (TFs) to control the activity of cis-regulatory elements (CREs). They might also enable
more accurate predictions of the functional consequences of non-coding mutations, which can help to prioritize diease-associated
variants. Additionally, gLMs capable of learning cis-regulatory rules could become instrumental in designing novel regulatory
sequences with desirable functional properties. They might also facilitate functional comparisons of non-coding sequences
across different species, a task currently complicated due to substantial evolutionary drift in non-coding regions.
Recently, there has been a surge of pre-trained gLMs20–38, 38–43. gLMs take as input DNA sequences that have undergone
tokenization, an encoding scheme applied to either a single nucleotide or k-mer of nucleotides. Through self-supervised
pre-training, the gLM learns a vector representation for each token in the DNA sequence via masked language modeling
(MLM)1or causal language modeling (CLM)44. In a standard setting of MLM, a portion of the input tokens, typically 15%1, is
randomly masked, and the task is to predict the masked tokens using the context provided by the rest of the unmasked tokens in
the sequence. On the other hand, CLM is an autoregressive pre-training task where the goal is to predict the next token in a
sequence given the previous tokens. These language modeling objectives result in learning self-supervised representations
of the sequence that capture information about individual tokens and the complex interrelationships between other tokens in
the sequence. The burden of learning biologically meaningful features is paid upfront during the pre-training. Afterward, the
gLM’s representations can be leveraged for a broad spectrum of downstream prediction tasks as inputs to simpler models,
bypassing the need to learn essential features for each task from scratch. In contrast, the conventional one-hot representation of
DNA sequences treats each element independently, assigning an identical representation for the same nucleotide characters
irrespective of their positions in the sequence or what context is nearby. Consequently, the responsibility of extracting important
features falls solely on the machine learning model being employed.
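To make the masked-language-modeling setup concrete for DNA, a small sketch of single-nucleotide-token masking is given below; the 15% rate follows the standard setting cited above, while the tokenizer and symbols are illustrative rather than taken from any particular gLM:

import random

def mask_for_mlm(sequence, mask_rate=0.15, mask_token="[MASK]"):
    """Return (masked tokens, target positions) for a DNA string, one token per nucleotide."""
    tokens = list(sequence)
    targets = {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            targets[i] = tok          # the model is trained to predict these
            tokens[i] = mask_token    # using the surrounding unmasked context
    return tokens, targets

tokens, targets = mask_for_mlm("ACGTACGTGGCT")
print(tokens, targets)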
Current gLMs are composed of different choices for the tokenization, base architecture, language modeling objective, and
pre-training data. Tokenization of DNA sequences is employed for either single nucleotide20–22 or k-mer of fixed size23–25 or a |
old-school-contrastive-divergence.pdf | OnContrastiv eDivergence Learning
Miguel A.Carreira-P erpi~nanGeorey E.Hinton
Dept. ofComputer Science, UniversityofToronto
6King's College Road. Toronto,ONM5S3H5,Canada
Email: fmiguel,hinton [email protected]
Abstract
Maxim um-lik elihood(ML) learning of
Markovrandom elds ischallenging because
itrequires estimates ofaverages thathavean
exponentialnumberofterms. Markovchain
MonteCarlo metho dstypically takealong
timetoconvergeonunbiased estimates, but
Hinton(2002) showedthatiftheMarkov
chainisonlyrunforafewsteps, thelearning
canstillworkwellanditapproximately
minimizes adieren tfunction called \con-
trastiv edivergence" (CD). CDlearning has
beensuccessfully applied tovarious typesof
random elds. Here, westudy theproperties
ofCDlearning andshowthatitprovides
biased estimates ingeneral, butthatthebias
istypically verysmall. FastCDlearning
cantherefore beusedtogetclosetoanML
solution andslowMLlearning canthenbe
usedtone-tune theCDsolution.
Consider a probability distribution over a vector x (assumed discrete w.l.o.g.) and with parameters W
    p(x;W) = (1/Z(W)) e^{-E(x;W)}        (1)
where Z(W) = Σ_x e^{-E(x;W)} is a normalisation constant and E(x;W) is an energy function. This class of random-field distributions has found many practical applications (Li, 2001; Winkler, 2002; Teh et al., 2003; He et al., 2004). Maximum-likelihood (ML) learning of the parameters W given an iid sample X = {x_n}_{n=1}^N can be done by gradient ascent:
    W^{(τ+1)} = W^{(τ)} + η ∂L(W;X)/∂W |_{W = W^{(τ)}}
(Footnote: Current address: Dept. of Computer Science & Electrical Eng., OGI School of Science & Engineering, Oregon Health & Science University. Email: [email protected].)
where the learning rate η need not be constant. The average log-likelihood is:
    L(W;X) = (1/N) Σ_{n=1}^N log p(x_n;W) = ⟨log p(x;W)⟩_0 = −⟨E(x;W)⟩_0 − log Z(W)
where ⟨·⟩_0 denotes an average w.r.t. the data distribution p_0(x) = (1/N) Σ_{n=1}^N δ(x − x_n). A well-known difficulty arises in the computation of the gradient
    ∂L(W;X)/∂W = −⟨∂E(x;W)/∂W⟩_0 + ⟨∂E(x;W)/∂W⟩_∞
where ⟨·⟩_∞ denotes an average with respect to the model distribution p_∞(x;W) = p(x;W). The average ⟨·⟩_0 is readily computed using the sample data X, but the average ⟨·⟩_∞ involves the normalisation constant Z(W), which cannot generally be computed efficiently (being a sum of an exponential number of terms). The standard approach is to approximate the average over the distribution with an average over a sample from p(x;W), obtained by setting up a Markov chain that converges to p(x;W) and running the chain to equilibrium (for reviews, see Neal, 1993; Gilks et al., 1996). This Markov chain Monte Carlo (MCMC) approach has the advantage of being readily applicable to many classes of distribution p(x;W). However, it is typically very slow, since running the Markov chain to equilibrium can require a very large number of steps, and no foolproof method exists to determine whether equilibrium has been reached. A further disadvantage is the large variance of the estimated gradient.
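As a rough illustration of the two averages in this gradient (our own sketch, not code from the paper): the data term is an exact average over the training sample, while the model term must be approximated with samples drawn by running a Markov chain — the step that CD truncates to just a few transitions started at the data:

import numpy as np

def ml_gradient_estimate(W, data, dEdW, transition, n_steps=1000):
    """Estimate dL/dW = -<dE/dW>_0 + <dE/dW>_inf.
    dEdW(x, W): gradient of the energy at configuration x (user-supplied).
    transition(x, W): one step of a Markov chain with stationary distribution p(x; W)."""
    data_term = np.mean([dEdW(x, W) for x in data], axis=0)   # <dE/dW>_0, computed exactly
    x = data[np.random.randint(len(data))].copy()             # start the chain somewhere
    samples = []
    for _ in range(n_steps):                                  # run (ideally) to equilibrium
        x = transition(x, W)
        samples.append(dEdW(x, W))
    model_term = np.mean(samples, axis=0)                     # approximates <dE/dW>_inf
    return -data_term + model_term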
To avoid the difficulty in computing the log-likelihood gradient, Hinton (2002) proposed the contrastive divergence (CD) method which approximately follows the gradient of a different function. ML learning minimises the Kullback–Leibler divergence
    KL(p_0 ∥ p_∞) = Σ_x p_0(x) log [ p_0(x) / p(x;W) ].
CD learning approximately follows the gradient of the
2403.06634.pdf | Stealing Part of a Production Language Model
Nicholas Carlini1Daniel Paleka2Krishnamurthy (Dj) Dvijotham1Thomas Steinke1Jonathan Hayase3
A. Feder Cooper1Katherine Lee1Matthew Jagielski1Milad Nasr1Arthur Conmy1Eric Wallace4
David Rolnick5Florian Tramèr2
Abstract
We introduce the first model-stealing attack that
extracts precise, nontrivial information from
black-box production language models like Ope-
nAI’s ChatGPT or Google’s PaLM-2. Specifi-
cally, our attack recovers the embedding projec-
tion layer (up to symmetries) of a transformer
model, given typical API access. For under $20
USD, our attack extracts the entire projection ma-
trix of OpenAI’s ada and babbage language
models. We thereby confirm, for the first time,
that these black-box models have a hidden di-
mension of 1024 and 2048, respectively. We
also recover the exact hidden dimension size of
the gpt-3.5-turbo model, and estimate it would
cost under $2,000 in queries to recover the entire
projection matrix. We conclude with potential
defenses and mitigations, and discuss the impli-
cations of possible future work that could extend
our attack.
1. Introduction
Little is publicly known about the inner workings of today’s
most popular large language models, such as GPT-4, Claude
2, or Gemini. The GPT-4 technical report states it “contains
no [...] details about the architecture (including model size),
hardware, training compute, dataset construction, training
method, or similar” (OpenAI et al., 2023). Similarly, the
PaLM-2 paper states that “details of [the] model size and
architecture are withheld from external publication” (Anil
et al., 2023). This secrecy is often ascribed to “the competi-
tive landscape” (because these models are expensive to train)
and the “safety implications of large-scale models” (OpenAI
et al., 2023) (because it is easier to attack models when more
information is available). Nevertheless, while these models’
weights and internal details are not publicly accessible, the
models themselves are exposed via APIs.
1Google DeepMind2ETH Zurich3University of Washington
4OpenAI5McGill University.In this paper we ask: how much information can an adver-
sary learn about a production language model by making
queries to its API? This is the question studied by the field
ofmodel stealing (Tramèr et al., 2016): the ability of an ad-
versary to extract model weights by making queries to its API.
Contributions. We introduce an attack that can be applied
to black-box language models, and allows us to recover
the complete embedding projection layer of a transformer
language model. Our attack departs from prior approaches
that reconstruct a model in a bottom-up fashion, starting
from the input layer. Instead, our attack operates top-down
and directly extracts the model’s last layer. Specifically,
we exploit the fact that the final layer of a language model
projects from the hidden dimension to a (higher dimen-
sional) logit vector. This final layer is thus low-rank, and by
making targeted queries to a model’s API, we can extract
its embedding dimension or its final weight matrix.
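The core observation can be illustrated with a toy simulation (ours, not the paper's actual query procedure; all sizes below are made up): because every logit vector is a linear image of a d-dimensional hidden state, logit vectors collected for many prompts span a d-dimensional subspace, and counting the non-negligible singular values of the stacked logit matrix recovers the hidden dimension.

import numpy as np

d, v, n_queries = 64, 1000, 200            # hidden size, vocab size, number of prompts (toy values)
W_out = np.random.randn(v, d)              # final projection, unknown to the attacker
hidden = np.random.randn(n_queries, d)     # hidden states for n_queries different prompts
logits = hidden @ W_out.T                  # what the API would expose (n_queries x v)

# The stacked logit matrix has rank ~= d, so its singular values drop sharply after the
# d-th one; counting the large singular values recovers the hidden dimension.
s = np.linalg.svd(logits, compute_uv=False)
est_d = int(np.sum(s > 1e-6 * s[0]))
print(est_d)  # ~64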
Stealing this layer is useful for several reasons. First, it
reveals the width of the transformer model, which is often
correlated with its total parameter count. Second, it slightly
reduces the degree to which the model is a complete “black-
box”, which might be useful for future attacks. Third,
while our attack recovers only a (relatively small) part of
the entire model, the fact that it is at all possible to steal any
parameters of a production model is surprising, and raises
concerns that extensions of this attack might be able to
recover more information. Finally, recovering the model’s
last layer (and thus hidden dimension) may reveal more
global information about the model, such as relative size
differences between different models.
Our attack is effective and efficient, and is applicable to
production models whose APIs expose full logprobs, or a
“logit bias”. This included Google’s PaLM-2 and OpenAI’s
GPT-4 (Anil et al., 2023; OpenAI et al., 2023); after respon-
sible disclosure, both APIs have implemented defenses to
prevent our attack or make it more expensive. We extract
the embedding layer of several OpenAI models with a mean
squared error of 10−4(up to unavoidable symmetries). We
apply a limited form of our attack to gpt-3.5 at a cost of un-
der $200 USD and, instead of recovering the full embedding
layer, recover just the size of the embedding dimension.
1arXiv:2403.06634v1 [cs.CR] 11 Mar 2024 |
2306.02531.pdf | PLANNER: Generating Diversified Paragraph via
Latent Language Diffusion Model
Yizhe Zhang, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly
Apple Inc.
{yizzhang, jgu32, zhuofeng_wu, szhai, jsusskind, njaitly}@apple.com
Abstract
Autoregressive models for text sometimes generate repetitive and low-quality
output because errors accumulate during the steps of generation. This issue is often
attributed to exposure bias – the difference between how a model is trained, and
how it is used during inference. Denoising diffusion models provide an alternative
approach in which a model can revisit and revise its output. However, they can be
computationally expensive and prior efforts on text have led to models that produce
less fluent output compared to autoregressive models, especially for longer text
and paragraphs. In this paper, we propose PLANNER , a model that combines
latent semantic diffusion with autoregressive generation, to generate fluent text
while exercising global control over paragraphs. The model achieves this by
combining an autoregressive “decoding” module with a “planning” module that
uses latent diffusion to generate semantic paragraph embeddings in a coarse-to-fine
manner. The proposed method is evaluated on various conditional generation tasks,
and results on semantic generation, text completion and summarization show its
effectiveness in generating high-quality long-form text in an efficient manner.
1 Introduction
[Figure 1 content: prefix “Horrible hotel. Horrible hotel. Serioulsy, horrible hotel.”; sample continuations from FT (greedy decoding), FT (top-p sampling), and our method (greedy decoding); tables of the most frequent first 1-, 2-, and 3-grams of the generations.]
Figure 1: Left: With a repetitive prompt, the finetuned GPT-2 large model (774M, FT) is still attracted
to self-reinforced repetition (highlighted text) even under top-p sampling (K=50, p=0.92). Right: the
most frequent first n-grams of the generations for each method, derived from 512 generation roll-outs.
Our proposed method results in a more diversified generation robust to the ill-composed prompt.
Autoregressive models trained with a teacher forcing strategy (Williams & Zipser, 1989) are con-
sidered the gold standard for text generation. However, a significant drawback of this approach is
that it lacks the ability to correct any mistakes made during the generation process which can lead to
errors that accumulate as the generation progresses. Previous work (Ott et al., 2018; Holtzman et al.,
2019; Welleck et al., 2019; Xu et al., 2022) has observed that deterministic decoding methods have a
Preprint. Under review.arXiv:2306.02531v1 [cs.CL] 5 Jun 2023 |
1707.06347.pdf | Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
OpenAI
{joschu, filip, prafulla, alec, oleg }@openai.com
Abstract
We propose a new family of policy gradient methods for reinforcement learning, which al-
ternate between sampling data through interaction with the environment, and optimizing a
“surrogate” objective function using stochastic gradient ascent. Whereas standard policy gra-
dient methods perform one gradient update per data sample, we propose a novel objective
function that enables multiple epochs of minibatch updates. The new methods, which we call
proximal policy optimization (PPO), have some of the benefits of trust region policy optimiza-
tion (TRPO), but they are much simpler to implement, more general, and have better sample
complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, includ-
ing simulated robotic locomotion and Atari game playing, and we show that PPO outperforms
other online policy gradient methods, and overall strikes a favorable balance between sample
complexity, simplicity, and wall-time.
1 Introduction
In recent years, several different approaches have been proposed for reinforcement learning with
neural network function approximators. The leading contenders are deep Q-learning [Mni+15],
“vanilla” policy gradient methods [Mni+16], and trust region / natural policy gradient methods
[Sch+15b]. However, there is room for improvement in developing a method that is scalable (to
large models and parallel implementations), data efficient, and robust (i.e., successful on a variety
of problems without hyperparameter tuning). Q-learning (with function approximation) fails on
many simple problems1 and is poorly understood, vanilla policy gradient methods have poor data
efficiency and robustness; and trust region policy optimization (TRPO) is relatively complicated,
and is not compatible with architectures that include noise (such as dropout) or parameter sharing
(between the policy and value function, or with auxiliary tasks).
This paper seeks to improve the current state of affairs by introducing an algorithm that attains
the data efficiency and reliable performance of TRPO, while using only first-order optimization.
We propose a novel objective with clipped probability ratios, which forms a pessimistic estimate
(i.e., lower bound) of the performance of the policy. To optimize policies, we alternate between
sampling data from the policy and performing several epochs of optimization on the sampled data.
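For concreteness, a minimal sketch of the clipped surrogate just described (variable names are ours): the probability ratio between the current and data-collecting policies is clipped to [1 − ε, 1 + ε], and taking the minimum with the unclipped term gives the pessimistic lower bound that is then maximized over several minibatch epochs.

import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    # Probability ratio between the current policy and the policy that collected the data.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (lower-bound) surrogate: minimize the negation to maximize it.
    return -torch.min(unclipped, clipped).mean()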
Our experiments compare the performance of various different versions of the surrogate objec-
tive, and find that the version with the clipped probability ratios performs best. We also compare
PPO to several previous algorithms from the literature. On continuous control tasks, it performs
better than the algorithms we compare against. On Atari, it performs significantly better (in terms
of sample complexity) than A2C and similarly to ACER though it is much simpler.
1While DQN works well on game environments like the Arcade Learning Environment [Bel+15] with discrete
action spaces, it has not been demonstrated to perform well on continuous control benchmarks such as those in
OpenAI Gym [Bro+16] and described by Duan et al. [Dua+16].
1
arXiv:1707.06347v2 [cs.LG] 28 Aug 2017 |
22-1514.pdf | Journal of Machine Learning Research 24 (2023) 1-42 Submitted 12/22; Published 6/23
Convex Reinforcement Learning in Finite Trials
Mirco Mutti [email protected]
Politecnico di Milano
Piazza Leonardo Da Vinci 32, 20133 Milan, Italy
Riccardo De Santi∗[email protected]
ETH Z¨ urich
R¨ amistrasse 101, 8092 Z¨ urich, Switzerland
Piersilvio De Bartolomeis∗[email protected]
ETH Z¨ urich
R¨ amistrasse 101, 8092 Z¨ urich, Switzerland
Marcello Restelli [email protected]
Politecnico di Milano
Piazza Leonardo Da Vinci 32, 20133 Milan, Italy
Editor: Kilian Weinberger
Abstract
Convex Reinforcement Learning (RL) is a recently introduced framework that generalizes
the standard RL objective to any convex (or concave) function of the state distribution
induced by the agent’s policy. This framework subsumes several applications of practical
interest, such as pure exploration, imitation learning, and risk-averse RL, among others.
However, the previous convex RL literature implicitly evaluates the agent’s performance
over infinite realizations (or trials), while most of the applications require excellent perfor-
mance over a handful, or even just one, trials. To meet this practical demand, we formulate
convex RL in finite trials, where the objective is any convex function of the empirical state
distribution computed over a finite number of realizations. In this paper, we provide a
comprehensive theoretical study of the setting, which includes an analysis of the impor-
tance of non-Markovian policies to achieve optimality, as well as a characterization of the
computational and statistical complexity of the problem in various configurations.
Keywords: Reinforcement Learning, Convex Reinforcement Learning, General Utilities,
Finite Trials, Non-Markovian Policies
1. Introduction
Although Reinforcement Learning (RL, Sutton and Barto, 2018) provides a powerful and
flexible framework to model sequential decision-making problems, many relevant applica-
tions do not fit naturally into the standard RL framework (Abel et al., 2021). Especially,
the objective function of RL can be seen as a linear combination between a reward vector
and the state distribution induced by the agent’s policy. However, some applications cannot
be cast into a linear objective function. Several works have thus extended the standard RL
formulation to address non-linear objectives of practical interest.
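To make the distinction concrete, the following small sketch (our illustration) computes the empirical state distribution from a finite number of trials and evaluates both a standard linear objective and a concave utility such as entropy, the pure-exploration example mentioned above; state indices and objectives are illustrative.

import numpy as np

def empirical_state_distribution(trajectories, n_states):
    # d_hat: fraction of visits to each state, pooled over a finite number of realizations (trials).
    counts = np.zeros(n_states)
    for traj in trajectories:
        for s in traj:
            counts[s] += 1
    return counts / counts.sum()

def linear_objective(d_hat, r):
    # Standard RL: a linear functional <r, d_hat> of the state distribution.
    return float(r @ d_hat)

def entropy_objective(d_hat, eps=1e-12):
    # Convex RL example: entropy of the empirical state distribution (pure exploration).
    return float(-(d_hat * np.log(d_hat + eps)).sum())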
∗Riccardo and Piersilvio contributed to the works (Mutti et al., 2022b,a) that are extended by this paper.
©2023 Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, and Marcello Restelli.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/ . Attribution requirements are provided
athttp://jmlr.org/papers/v24/22-1514.html . |
1606.08415.pdf | GAUSSIAN ERROR LINEAR UNITS (GELU S)
Dan Hendrycks∗
University of California, Berkeley
[email protected] Gimpel
Toyota Technological Institute at Chicago
[email protected]
ABSTRACT
We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is xΦ(x), where Φ(x) is
the standard Gaussian cumulative distribution function. The GELU nonlinearity
weights inputs by their value, rather than gates inputs by their sign as in ReLUs
(x·1_{x>0}). We perform an empirical evaluation of the GELU nonlinearity against
the ReLU and ELU activations and find performance improvements across all
considered computer vision, natural language processing, and speech tasks.
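A minimal sketch of the nonlinearity defined above, computing xΦ(x) exactly via the Gaussian CDF; the tanh-based variant shown alongside is a commonly used approximation included here only for reference:

import math

def gelu(x):
    # Exact form: x * Phi(x), with Phi the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh_approx(x):
    # Widely used tanh approximation of the same function.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))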
1 I NTRODUCTION
Early artificial neurons utilized binary threshold units (Hopfield, 1982; McCulloch & Pitts, 1943).
These hard binary decisions are smoothed with sigmoid activations, enabling a neuron to have a “fir-
ing rate” interpretation and to train with backpropagation. But as networks became deeper, training
with sigmoid activations proved less effective than the non-smooth, less-probabilistic ReLU (Nair &
Hinton, 2010) which makes hard gating decisions based upon an input’s sign. Despite having less of
a statistical motivation, the ReLU remains a competitive engineering solution which often enables
faster and better convergence than sigmoids. Building on the successes of ReLUs, a recent modifi-
cation called ELUs (Clevert et al., 2016) allows a ReLU-like nonlinearity to output negative values
which sometimes increases training speed. In all, the activation choice has remained a necessary
architecture decision for neural networks lest the network be a deep linear classifier.
Deep nonlinear classifiers can fit their data so well that network designers are often faced with the
choice of including stochastic regularizer like adding noise to hidden layers or applying dropout (Sri-
vastava et al., 2014), and this choice remains separate from the activation function. Some stochastic
regularizers can make the network behave like an ensemble of networks, a pseudoensemble (Bach-
man et al., 2014), and can lead to marked accuracy increases. For example, the stochastic regular-
izer dropout creates a pseudoensemble by randomly altering some activation decisions through zero
multiplication. Nonlinearities and dropout thus determine a neuron’s output together, yet the two
innovations have remained distinct. More, neither subsumed the other because popular stochastic
regularizers act irrespectively of the input and nonlinearities are aided by such regularizers.
In this work, we introduce a new nonlinearity, the Gaussian Error Linear Unit (GELU). It relates
to stochastic regularizers in that it is the expectation of a modification to Adaptive Dropout (Ba &
Frey, 2013). This suggests a more probabilistic view of a neuron’s output. We find that this novel
nonlinearity matches or exceeds models with ReLUs or ELUs across tasks from computer vision,
natural language processing, and automatic speech recognition.
2 GELU F ORMULATION
We motivate our activation function by combining properties from dropout, zoneout, and ReLUs.
First note that a ReLU and dropout both yield a neuron’s output with the ReLU deterministi-
cally multiplying the input by zero or one and dropout stochastically multiplying by zero. Also,
a new RNN regularizer called zoneout stochastically multiplies inputs by one (Krueger et al.,
2016). We merge this functionality by multiplying the input by zero or one, but the values of
this zero-one mask are stochastically determined while also dependent upon the input. Specif-
ically, we can multiply the neuron input xbym∼Bernoulli (Φ(x)), where Φ(x) = P(X≤
∗Work done while the author was at TTIC. Code available at github.com/hendrycks/GELUs
1arXiv:1606.08415v5 [cs.LG] 6 Jun 2023 |
2110.07205.pdf | SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for
Spoken Language Processing
Junyi Ao1,2,∗, Rui Wang3,∗, Long Zhou4,∗, Chengyi Wang4, Shuo Ren4,
Yu Wu4, Shujie Liu4, Tom Ko1, Qing Li2, Yu Zhang1,5, Zhihua Wei3,
Yao Qian4, Jinyu Li4, Furu Wei4
1Department of Computer Science and Engineering,
Southern University of Science and Technology
2Department of Computing, The Hong Kong Polytechnic University
3Department of Computer Science and Technology, Tongji University
4Microsoft5Peng Cheng Laboratory
Abstract
Motivated by the success of T5 (Text-To-
Text Transfer Transformer) in pre-trained nat-
ural language processing models, we pro-
pose a unified-modal SpeechT5 framework
that explores the encoder-decoder pre-training
for self-supervised speech/text representation
learning. The SpeechT5 framework consists
of a shared encoder-decoder network and six
modal-specific (speech/text) pre/post-nets. Af-
ter preprocessing the input speech/text through
the pre-nets, the shared encoder-decoder net-
work models the sequence-to-sequence trans-
formation, and then the post-nets generate the
output in the speech/text modality based on the
output of the decoder. Leveraging large-scale
unlabeled speech and text data, we pre-train
SpeechT5 to learn a unified-modal representa-
tion, hoping to improve the modeling capabil-
ity for both speech and text. To align the tex-
tual and speech information into this unified
semantic space, we propose a cross-modal vec-
tor quantization approach that randomly mixes
up speech/text states with latent units as the
interface between encoder and decoder. Ex-
tensive evaluations show the superiority of the
proposed SpeechT5 framework on a wide va-
riety of spoken language processing tasks, in-
cluding automatic speech recognition, speech
synthesis, speech translation, voice conver-
sion, speech enhancement, and speaker iden-
tification. We release our code and model
athttps://github.com/microsoft/
SpeechT5 .
1 Introduction
Starting with ELMo (Peters et al., 2018) and BERT
(Devlin et al., 2019), substantial work has shown
that pre-trained models can significantly improve
in various natural language processing (NLP) tasks
∗Equal contribution. Work is done by the first two authors
during internship at Microsoft Research Asia. Correspondence
to: Long Zhou ([email protected])
Figure 1: An illustration of the SpeechT5 frame-
work, which treats spoken language processing tasks
as a speech/text to speech/text format, including au-
tomatic speech recognition (ASR), speech translation
(ST), speech identification (SID), text to speech (TTS),
voice conversion (VC), and speech enhancement (SE).
(Radford et al., 2019; CONNEAU and Lample,
2019; Yang et al., 2019; Dong et al., 2019; Lewis
et al., 2020). Following the pre-training techniques
in NLP, self-supervised speech representation learn-
ing has also been investigated and shown promising
results, benefiting from richly learned representa-
tions (Chung and Glass, 2018; Chuang et al., 2020;
Song et al., 2019; Baevski et al., 2020; Wang et al.,
2021; Hsu et al., 2021; Chung et al., 2021a), such
as wav2vec 2.0 (Baevski et al., 2020) and HuBERT
(Hsu et al., 2021).
However, previous speech pre-training work suf-
fers from two problems: (1) most of them learn the
speech representation with only unlabeled speech
data but ignore the importance of textual data
to spoken language tasks (e.g., automatic speech
recognition) which require the modality transfor-
mation; (2) most of these models solely rely on a
pre-trained speech encoder for various downstream
tasks, leaving the decoder not pre-trained for the
sequence-to-sequence generation tasks. How to de-
sign a unified encoder-decoder model that can take
advantage of both unlabeled speech and text data to
improve various spoken language processing tasks
is not well explored.
Inspired by the T5 method (Raffel et al., 2020),arXiv:2110.07205v3 [eess.AS] 24 May 2022 |
image-decoding-paper.pdf | BRAIN DECODING :TOWARD REAL -TIME
RECONSTRUCTION OF VISUAL PERCEPTION
Yohann Benchetrit1,∗, Hubert Banville1,∗, Jean-Rémi King1,2
1FAIR, Meta, 2Laboratoire des Systèmes Perceptifs, École Normale Supérieure, PSL University
{ybenchetrit,hubertjb,jeanremi }@meta.com
ABSTRACT
In the past five years, the use of generative and foundational AI systems has
greatly improved the decoding of brain activity. Visual perception, in particular,
can now be decoded from functional Magnetic Resonance Imaging (fMRI) with
remarkable fidelity. This neuroimaging technique, however, suffers from a lim-
ited temporal resolution ( ≈0.5 Hz) and thus fundamentally constrains its real-time
usage. Here, we propose an alternative approach based on magnetoencephalog-
raphy (MEG), a neuroimaging device capable of measuring brain activity with
high temporal resolution ( ≈5,000 Hz). For this, we develop an MEG decoding
model trained with both contrastive and regression objectives and consisting of
three modules: i) pretrained embeddings obtained from the image, ii) an MEG
module trained end-to-end and iii) a pretrained image generator. Our results are
threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval
over classic linear decoders. Second, late brain responses to images are best de-
coded with DINOv2, a recent foundational image model. Third, image retrievals
and generations both suggest that MEG signals primarily contain high-level visual
features, whereas the same approach applied to 7T fMRI also recovers low-level
features. Overall, these results provide an important step towards the decoding
– in real time – of the visual processes continuously unfolding within the human
brain.
1 I NTRODUCTION
Automating the discovery of brain representations. Understanding how the human brain rep-
resents the world is arguably one of the most profound scientific challenges. This quest, which
originally consisted of searching, one by one, for the specific features that trigger each neuron, ( e.g.
Hubel & Wiesel (1962); O’Keefe & Nadel (1979); Kanwisher et al. (1997)), is now being automated
by Machine Learning (ML) in two mains ways. First, as a signal processing tool, ML algorithms are
trained to extract informative patterns of brain activity in a data-driven manner. For example, Kami-
tani & Tong (2005) trained a support vector machine to classify the orientations of visual gratings
from functional Magnetic Resonance Imaging (fMRI). Since then, deep learning has been increas-
ingly used to discover such brain activity patterns (Roy et al., 2019; Thomas et al., 2022; Jayaram
& Barachant, 2018; D ´efossez et al., 2022; Scotti et al., 2023). Second, ML algorithms are used as
functional models of the brain. For example, Yamins et al. (2014) have shown that the embedding
of natural images in pretrained deep nets linearly account for the neuronal responses to these images
in the cortex. Since, pretrained deep learning models have been shown to account for a wide variety
of stimuli including text, speech, navigation, and motor movement (Banino et al., 2018; Schrimpf
et al., 2020; Hausmann et al., 2021; Mehrer et al., 2021; Caucheteux et al., 2023).
Generating images from brain activity. This observed representational alignment between brain
activity and deep learning models creates a new opportunity: Decoding of visual stimuli need not
be restricted to a limited set of classes, but can now leverage pretrained representations to condi-
tion subsequent generative AI models. While the resulting image may be partly “hallucinated”,
interpreting images can be much simpler than interpreting latent features. Following a long series
∗Equal contribution.
1 |
2402.13064.pdf | Synthetic Data (Almost) from Scratch:
Generalized Instruction Tuning for Language Models
Haoran Li∗, Qingxiu Dong∗, Zhengyang Tang∗, Chaojun Wang∗, Xingxing Zhang∗, Haoyang Huang∗
Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng
Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, Furu Wei
https://aka.ms/GeneralAI
Abstract
We introduce Generalized Instruction Tuning (called GLAN ), a general and scal-
able method for instruction tuning of Large Language Models (LLMs). Unlike prior
work that relies on seed examples or existing datasets to construct instruction tuning
data, GLAN exclusively utilizes a pre-curated taxonomy of human knowledge and
capabilities as input and generates large-scale synthetic instruction data across all
disciplines. Specifically, inspired by the systematic structure in human education
system, we build the taxonomy by decomposing human knowledge and capabilities
to various fields, sub-fields and ultimately, distinct disciplines semi-automatically,
facilitated by LLMs. Subsequently, we generate a comprehensive list of subjects
for every discipline and proceed to design a syllabus tailored to each subject, again
utilizing LLMs. With the fine-grained key concepts detailed in every class session
of the syllabus, we are able to generate diverse instructions with a broad coverage
across the entire spectrum of human knowledge and skills. Extensive experiments
on large language models (e.g., Mistral) demonstrate that GLAN excels in mul-
tiple dimensions from mathematical reasoning, coding, academic exams, logical
reasoning to general instruction following without using task-specific training data
of these tasks. In addition, GLAN allows for easy customization and new fields or
skills can be added by simply incorporating a new node into our taxonomy.
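The pipeline described in this abstract can be sketched as the following loop; the prompt strings, parsing scheme, and function name below are illustrative placeholders of ours, not the paper's actual prompts:

def glan_generate(llm, taxonomy):
    """taxonomy: {field: {sub_field: [discipline, ...]}} -- the pre-curated input.
    llm: callable mapping a prompt string to a newline-separated response string.
    Sketch of the discipline -> subjects -> syllabus -> key concepts -> instructions loop."""
    instructions = []
    for field, sub_fields in taxonomy.items():
        for sub_field, disciplines in sub_fields.items():
            for discipline in disciplines:
                subjects = llm(f"List subjects for the discipline: {discipline}").splitlines()
                for subject in subjects:
                    sessions = llm(f"Design a syllabus for {subject}; one class session per line, "
                                   f"each listing its key concepts").splitlines()
                    for session in sessions:
                        instructions.append(
                            llm(f"Write an instruction (question or task) probing the key concepts "
                                f"of this class session of {subject}: {session}"))
    return instructions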
1 Introduction
Large Language Models (LLMs) have enabled unprecedented capabilities to understand and generate
text like humans. By scaling up model size and data size [ KMH+20,HBM+22], LLMs are better at
predicting next tokens and prompting to perform certain tasks with a few demonstrations [ BMR+20].
However, these capabilities do not directly translate to better human instruction following [ OWJ+22].
Instruction tuning [ WBZ+21] bridges this gap through fine-tuning LLMs on instructions paired with
human-preferred responses.
Prior work constructs instruction tuning data from seed examples or existing datasets. Initially, natural
language processing (NLP) datasets described via instructions are used to fine-tune LLMs and the
resulting LLMs can generalize on unseen (NLP) tasks [ WBZ+21]. However, there are only thousands
of NLP tasks [ WMA+22,LHV+23] available, which limits the tuned LLMs to generalize in real-
world scenarios [ XSZ+23]. Self-instruct [ WKM+22] is a cost effective method for creating synthetic
instruction tuning datasets, which starts from a small pool of human-written seed instructions and
∗Equal contribution. X. Zhang ([email protected]), H. Huang, S. Huang, X. Huang, Z. Huang,
D. Zhang, X. Wang, S. Chen, L. Dong and F. Wei are with Microsoft. H. Li and W. Lu are with Singapore
University of Technology and Design. Q. Dong, X. Cheng and Z. Sui are with Peking University. Z. Tang and B.
Wang are with Chinese University of Hong Kong, Shenzhen. C. Wang and W. Lam are with Chinese University
of Hong Kong. Y . Gu is with Tsinghua University.arXiv:2402.13064v1 [cs.CL] 20 Feb 2024 |
2402.17764v1.pdf | The Era of 1-bit LLMs:
All Large Language Models are in 1.58 Bits
Shuming Ma∗Hongyu Wang∗Lingxiao Ma Lei Wang Wenhui Wang
Shaohan Huang Li Dong Ruiping Wang Jilong Xue Furu Wei⋄
https://aka.ms/GeneralAI
Abstract
Recent research, such as BitNet [ WMD+23], is paving the way for a new era of 1-
bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant,
namely BitNet b1.58 , in which every single parameter (or weight) of the LLM is
ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer
LLM with the same model size and training tokens in terms of both perplexity
and end-task performance, while being significantly more cost-effective in terms
of latency, memory, throughput, and energy consumption. More profoundly, the
1.58-bit LLM defines a new scaling law and recipe for training new generations of
LLMs that are both high-performance and cost-effective. Furthermore, it enables
a new computation paradigm and opens the door for designing specific hardware
optimized for 1-bit LLMs.
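As a toy illustration of why ternary weights change the computation (the quantizer below is an assumed, illustrative recipe rather than a quote of the paper's exact method): with entries in {-1, 0, 1}, a matrix-vector product reduces to signed sums of activations, as depicted in Figure 1.

import numpy as np

def ternarize(W, eps=1e-8):
    # Illustrative quantizer (assumed form): scale by the mean absolute weight,
    # then round and clip every entry to {-1, 0, 1}.
    scale = np.mean(np.abs(W)) + eps
    return np.clip(np.round(W / scale), -1, 1), scale

def ternary_matvec(W_t, scale, x):
    # With entries in {-1, 0, 1}, each output is a signed sum of inputs,
    # e.g. x0 - x1 - x2 + x3 -- no weight/activation multiplications are needed.
    return scale * (W_t @ x)

W = np.random.randn(4, 4).astype(np.float32)
x = np.random.randn(4).astype(np.float32)
W_t, s = ternarize(W)
print(ternary_matvec(W_t, s, x))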
[Figure 1 content: an FP16/BF16 weight matrix W of a Transformer LLM contrasted with a BitNet b1.58 weight matrix whose entries lie in {-1, 0, 1}; with ternary weights the product Y = f(W, X) reduces to signed sums of inputs such as x0 − x1 − x2 + x3, motivating new hardware; the 1.58-bit model is depicted as a Pareto improvement on the cost-performance trade-off.]
Figure 1: 1-bit LLMs (e.g., BitNet b1.58 ) provide a Pareto solution to reduce inference cost (latency,
throughput, and energy) of LLMs while maintaining model performance. The new computation
paradigm of BitNet b1.58 calls for actions to design new hardware optimized for 1-bit LLMs.
∗Equal contribution. ⋄Corresponding author. S. Ma, L. Ma, L. Wang, W. Wang, S. Huang, L. Dong, J. Xue,
F. Wei are with Microsoft Research. H. Wang and R. Wang are with University of Chinese Academy of Sciences.arXiv:2402.17764v1 [cs.CL] 27 Feb 2024 |
6593-contrastive-preference-learnin.pdf | Under review as a conference paper at ICLR 2024
CONTRASTIVE PREFERENCE LEARNING : LEARNING
FROM HUMAN FEEDBACK WITHOUT RL
Anonymous authors
Paper under double-blind review
ABSTRACT
Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular
paradigm for aligning models with human intent. Typically RLHF algorithms
operate in two phases: first, use human preferences to learn a reward function and
second, align the model by optimizing the learned reward via reinforcement learn-
ing (RL). This paradigm assumes that human preferences are distributed according
to reward, but recent work suggests that they instead follow the regret under the
user’s optimal policy. Thus, learning a reward function from feedback is not only
based on a flawed assumption of human preference, but also leads to unwieldy
optimization challenges that stem from policy gradients or bootstrapping in the RL
phase. Because of these optimization challenges, contemporary RLHF methods
restrict themselves to contextual bandit settings (e.g., as in large language models)
or limit observation dimensionality (e.g., state-based robotics). We overcome these
limitations by introducing a new family of algorithms for optimizing behavior from
human feedback using the regret -based model of human preferences. Using the
principle of maximum entropy, we derive Contrastive Preference Learning (CPL),
an algorithm for learning optimal policies from preferences without learning re-
ward functions, circumventing the need for RL. CPL is fully off-policy, uses only a
simple contrastive objective, and can be applied to arbitrary MDPs. In contrast to
prior work, this enables CPL to elegantly scale to high-dimensional and sequential
RLHF problems.
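One plausible instantiation of the contrastive objective sketched in this abstract — scoring each preference segment by a discounted, temperature-scaled sum of the policy's log-probabilities and contrasting the two segments — is given below; the exact form is our assumption, and the paper's derivation should be consulted for the precise objective.

import torch

def cpl_style_loss(logp_pref, logp_dispref, gammas, alpha=0.1):
    """logp_*: (T,) tensors of log pi_theta(a_t|s_t) along the preferred / dispreferred segment.
    gammas: (T,) tensor of discount factors gamma**t. Assumed form, for illustration only."""
    s_pos = alpha * (gammas * logp_pref).sum()
    s_neg = alpha * (gammas * logp_dispref).sum()
    # Bradley-Terry style contrastive loss over the two segment scores.
    return -torch.nn.functional.logsigmoid(s_pos - s_neg)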
1 I NTRODUCTION
As large pretrained models have become increasingly performant, the problem of aligning them
with human preferences has risen to the forefront of research. This alignment is especially difficult
when larger datasets inevitably include suboptimal behaviors. Reinforcement learning from human
feedback (RLHF) has emerged as a popular solution to this problem. Using human preferences,
RLHF techniques discriminate between desirable and undesirable behaviors with the goal of refining
a learned policy. This paradigm has shown promising results when applied to finetuning large
language models (LLMs) (Ouyang et al., 2022), improving image generation models (Lee et al.,
2023), and adapting robot policies (Christiano et al., 2017) – all from suboptimal data. For most
RLHF algorithms, this process includes two phases. First, a reward model is trained from collected
user preference data. And second, that reward model is optimized by an off-the-shelf reinforcement
learning (RL) algorithm.
Unfortunately, this two-phase paradigm is founded on a flawed assumption. Algorithms that learn
reward models from preference data require that human preferences are distributed according to
the discounted sum of rewards or partial return of each behavior segment. However, recent work
(Knox et al., 2022) calls this into question, positing that humans instead provide preferences based
on the regret of each behavior under the optimal policy of the expert’s reward function. Intuitively,
a human’s judgement is likely based on optimality, instead of which states and actions have higher
quantity for reward. As a result, the correct quantity to learn from feedback might not be the reward,
but instead the optimal advantage function or, in other words, the negated regret.
In their second phase, two-phase RLHF algorithms optimize the reward function learned from the first
phase with RL. In practice, RL algorithms suffer from a suite of optimization challenges stemming
from temporal credit assignment, such as the high-variance of policy gradients (Marbach & Tsitsiklis,
1 |
10.1101.2022.12.21.521521.pdf | Language models generalize beyond natural proteins
Robert Verkuil1 *Ori Kabeli1 *Yilun Du1 2Basile I. M. Wicky3 4Lukas F. Milles3 4Justas Dauparas3 4
David Baker3 4 5Sergey Ovchinnikov6Tom Sercu1Alexander Rives1 7 †
Abstract
Learning the design patterns of proteins from sequences
across evolution may have promise toward generative
protein design. However it is unknown whether lan-
guage models, trained on sequences of natural proteins,
will be capable of more than memorization of existing
protein families. Here we show that language models
generalize beyond natural proteins to generate de novo
proteins. We focus on two protein design tasks: fixed
backbone design where the structure is specified, and
unconstrained generation where the structure is sampled
from the model. Remarkably although the models are
trained only on sequences, we find that they are capable
of designing structure. A total of 228 generated pro-
teins are evaluated experimentally with high overall suc-
cess rates (152/228 or 67%) in producing a soluble and
monomeric species by size exclusion chromatography.
Out of 152 experimentally successful designs, 35 have
no significant sequence match to known natural proteins.
Of the remaining 117, sequence identity to the nearest
sequence match is at median 27%, below 20% for 6
designs, and as low as 18% for 3 designs. For fixed back-
bone design, the language model generates successful
designs for each of eight experimentally evaluated artifi-
cially created fixed backbone targets. For unconstrained
generation, sampled proteins cover diverse topologies
and secondary structure compositions, and have high
experimental success rate (71/129 or 55%). The designs
reflect deep patterns linking sequence and structure, in-
cluding motifs that occur in related natural structures,
and motifs that are not observed in similar structural
contexts in known protein families. The results show
that language models, though only trained on sequences,
learn a deep grammar that enables the design of protein
structure, extending beyond natural proteins.
*Equal contribution1Meta Fundamental AI Research Protein Team
(FAIR).2Massachusetts Institute of Technology. Work performed as
visiting researcher at Meta FAIR.3Department of Biochemistry, Uni-
versity of Washington, Seattle, WA, USA.4Institute for Protein De-
sign, University of Washington, Seattle, WA, USA.5Howard Hughes
Medical Institute, University of Washington, Seattle, WA, USA.6John
Harvard Distinguished Science Fellowship Program, Harvard Univer-
sity, Cambridge, MA, USA.7New York University. Data available
at:<https://github.com/facebookresearch/esm >.†Correspondence to
<[email protected]>.
Preprint. Copyright 2022 by the authors.Introduction
Generative artificial intelligence for biology has potential to open
up a space of protein design beyond natural proteins. Since amino
acid sequences are the fundamental codes of proteins, learning
to read and write these codes with a language model may have
promise. Language models have played a central role in recent
advances in artificial intelligence ( 1), including developments in
complex reasoning, mathematical problem solving, image gener-
ation, and natural language generation ( 2–4). Scaling laws link
performance with the compute, data, and number of parameters
used to train the models ( 5), and emergence of higher level capabil-
ities is observed with increasing scale ( 6). In biology, recent work
on evolutionary scale language models of proteins has shown that
a deep knowledge of intrinsic biological properties emerges from
training on protein sequences ( 7). Information about the folded
three dimensional structure of proteins develops within the mod-
els, extending to atomic resolution structure ( 8). This information
emerges through training on sequences alone. At the same time
the structural information that emerges as a result of training on
sequences has been shown to depend on the available evolutionary
information, varying as a function of the number of related proteins
in the training data ( 8,9). It is an open question across domains to
what extent language models are capable of generalizing outside
their training data. In biology, it is unknown whether language
models can be used to explore a design space beyond that of natural
proteins.
Here we demonstrate that language models generalize beyond nat-
ural proteins to generate de novo proteins, different in sequence
and structure from natural proteins. We experimentally validate a
large number of designs spanning diverse topologies and sequences.
We find that although language models are trained only on the se-
quences of proteins, they are capable of designing protein structure,
including structures of artificially engineered de novo proteins that
are distinct from those of natural proteins. Given the backbone of
ade novo protein structure as a target, the language model gener-
ates sequences that are predicted to fold to the specified structure.
When the sequence and structure are both free, language models
produce designs that span a wide range of fold topologies and
secondary structure compositions, creating proteins which overlap
the natural sequence distribution as well as extend beyond it. De-
signs succeed experimentally across the space of sampled proteins,
including many designs that are distant in sequence from natural
proteins. The model generates motifs that link sequence to the
design of structure and can apply them in new sequence and struc-
tural contexts, including motifs such as complex hydrogen bond. CC-BY-NC-ND 4.0 International license available under a(which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint this version posted December 22, 2022. ; https://doi.org/10.1101/2022.12.21.521521doi: bioRxiv preprint |
2404.16767v1.pdf | REBEL : Reinforcement Learning via Regressing
Relative Rewards
Zhaolin Gao♣, Jonathan D. Chang♣, Wenhao Zhan♦, Owen Oertell♣, Gokul Swamyr, Kianté
Brantley♣, Thorsten Joachims♣, J. Andrew Bagnellr, Jason D. Lee♦, Wen Sun♣
♣Cornell University∗ ♦Princeton University†rCarnegie Mellon University‡
Abstract
While originally developed for continuous control problems, Proximal Policy Optimization (PPO) has emerged as the work-horse of a variety of reinforcement learning (RL) applications including the fine-tuning of generative models. Unfortunately, PPO requires multiple heuristics to enable stable convergence (e.g. value networks, clipping) and is notorious for its sensitivity to the precise implementation of these components. In response, we take a step back and ask what a minimalist RL algorithm for the era of generative models would look like. We propose REBEL, an algorithm that cleanly reduces the problem of policy optimization to regressing the relative rewards via a direct policy parameterization between two completions to a prompt, enabling strikingly lightweight implementation. In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL, which allows us to match the strongest known theoretical guarantees in terms of convergence and sample complexity in the RL literature. REBEL can also cleanly incorporate offline data and handle the intransitive preferences we frequently see in practice. Empirically, we find that REBEL provides a unified approach to language modeling and image generation with stronger or similar performance as PPO and DPO, all while being simpler to implement and more computationally tractable than PPO.
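A plausible sketch of the regression described above — regressing the relative change in log-likelihood of two completions to the same prompt onto their reward difference — is given below; the exact scaling and parameterization are our assumptions, for illustration only.

import torch

def rebel_loss(logp_new_a, logp_new_b, logp_old_a, logp_old_b, r_a, r_b, eta=1.0):
    """Regress the (scaled) relative log-probability change of two completions (a, b)
    to the same prompt onto their reward difference. Assumed form, following the
    abstract's description of regressing relative rewards."""
    pred = (1.0 / eta) * ((logp_new_a - logp_old_a) - (logp_new_b - logp_old_b))
    target = r_a - r_b
    return (pred - target).pow(2).mean()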
1 Introduction
The generality of the reinforcement learning (RL) paradigm is striking: from continuous control
problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al.,
2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making
tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization
(PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language
models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023)
to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024).
If we take a step back however, it is odd that we are using an algorithm designed for optimizing
two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters
∗{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu ,[email protected]
†{wenhao.zhan, jasonlee}@princeton.edu
‡{gswamy,bagnell2}@andrew.cmu.edu
1arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 |
8781-turing-complete-transformers-t.pdf | Under review as a conference paper at ICLR 2023
TURING COMPLETE TRANSFORMERS : T WOTRANS -
FORMERS AREMORE POWERFUL THAN ONE
Anonymous authors
Paper under double-blind review
ABSTRACT
This paper presents Find+Replace transformers, a family of multi-transformer
architectures that can provably do things no single transformer can, and which
outperform GPT-4 on several challenging tasks. We first establish that tra-
ditional transformers and similar architectures are not Turing complete, while
Find+Replace transformers are. Using this fact, we show how arbitrary programs
can be compiled to Find+Replace transformers, aiding interpretability research.
We also demonstrate the superior performance of Find+Replace transformers over
GPT-4 on a set of composition challenge problems (solving problems 20x longer
on tower of Hanoi, 3%→100% on multiplication, 72%→100% on a dynamic
programming problem). This work aims to provide a theoretical basis for multi-
agent architectures, and to encourage their exploration.
1 I NTRODUCTION
The first computers – including the difference engine , differential analyzer, Z1, and ABC (Bab-
bage & Babbage, 1825; Bush, 1931; Rojas, 2014; Atanasoff, 1940) – were not Turing Complete.
Some such machines, like the Hollerith Tabulating Machine (Hollerith, 1889) and the Harvard Mark
I (Comrie, 1946), even achieved considerable real-world use despite that limitation. However, the
advent of Turing Complete computers (Turing et al., 1936; Goldstine & Goldstine, 1946; Kilburn,
1949) fundamentally changed how computers were used and led to the development of more com-
plex, comprehensible, and composable programs (Backus, 1954; Copeland, 2004).
As we will show in this paper, current LLMs based on the transformer architecture (Vaswani et al.,
2017) are not Turing Complete. We present an alternative that is.
The fundamental reason transformers are not Turing complete is that, once the architecture of a
transformer is decided, there is a bounded amount of computation that it can do. This guarantees the
model will fail to generalize beyond input of some length and complexity. Such limitations are not
only theoretical, they are supported by a number of recent results on the ability of language models
to generalize to large context lengths (Del’etang et al., 2022; Liu et al., 2023; Dziri et al., 2023).
Addressing these deficiencies is nontrivial and requires a fundamental shift in approach. We propose
an approach drawing from multi-agent systems (Messing, 2003; Stone & Veloso, 2000), particularly
multi-transformer systems. Such systems have recently garnered interest, being employed to gener-
ate simulacra of human behavior (Park et al., 2023), perform economic simulations (Horton, 2023),
and demonstrate open-ended exploration in games like Minecraft (Wang et al., 2023a).
This paper presents a family of multi-transformer architectures, and provides theoretical and em-
pirical evidence the family can outperform traditional transformers. We hope this study will ignite
further investigations into architectures that are multi-transformer and Turing complete.
Our contributions are as follows:
• We provide a simple proof that current LLMs are not Turing Complete
• We present Find+Replace transformers, a family of provably Turing Complete architectures
• We introduce a method for turning any program into a Find+Replace transformer
• We show that Find+Replace transformers out-perform GPT-4 on a set of challenge tasks
1 |
2404.09173.pdf | TransformerFAM: Feedback attention is working memory
Dongseong Hwang1Weiran Wang1Zhuoyuan Huo1Khe Chai Sim1Pedro Moreno Mengibar1
Abstract
While Transformers have revolutionized deep
learning, their quadratic attention complexity hin-
ders their ability to process infinitely long inputs.
We propose Feedback Attention Memory (FAM),
a novel Transformer architecture that leverages
a feedback loop to enable the network to at-
tend to its own latent representations. This de-
sign fosters the emergence of working memory
within the Transformer, allowing it to process
indefinitely long sequences. TransformerFAM
requires no additional weights, enabling seam-
less integration with pre-trained models. Our
experiments show that TransformerFAM signif-
icantly improves Transformer performance on
long-context tasks across various model sizes (1B,
8B, and 24B). These results showcase the poten-
tial to empower Large Language Models (LLMs)
to process sequences of unlimited length.
1. Introduction
The introduction of the Transformer architecture (Vaswani
et al., 2017) has revolutionized deep learning by permeat-
ing diverse domains and enhancing performance due to its
efficacy and scalability. This scalability fuels a trend analo-
gous to Moore’s law, which links increased model size to
performance gains (Kaplan et al., 2020).
The effectiveness of attention in text sequence processing
was solidified through the Transformer paper. Models like
BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020)
further showcased the scalability of Transformer and its
tendency for improved performance with increased model
size. Following the replacement of LSTM (Hochreiter &
Schmidhuber, 1997) by Transformer in most Natural Lan-
guage Processing (NLP) domains, the Vision Transformer
(ViT) (Dosovitskiy et al., 2020) replaced Convolutional
Neural Network (CNN) (LeCun et al., 1995) with Trans-
formers in the vision domain, and Conformer (Convolution-
augmented Transformer) (Gulati et al., 2020) replaced
1Google LLC, Mountain View, CA, USA. Correspondence to:
Dongseong Hwang <[email protected] >.LSTM in the speech domain. The Transformer has become
the de facto architecture in various domains. Currently,
attention serves as the leading architecture for extracting
meaningful representations from homogeneous data.
The logical progression points toward extending attention ca-
pabilities to heterogeneous data. This has enabled advances
in multimodal fusion (text and vision), as seen in models
like DALL ·E 2 (Ramesh et al., 2022), Flamingo (Alayrac
et al., 2022) and CoCa (Yu et al., 2022). AudioLM (Bor-
sos et al., 2023) has shown that attention also excels at
fusing audio and text. Consequently, Gemini (Team et al.,
2023) integrates text, images, audio, and video into a single
generative model. This was possible because attention to
heterogeneous data works exceptionally well.
Despite the impressive success of attention, it suffers from
major drawbacks. Firstly, attention has quadratic complexity
with respect to context length, which limits the capability
of modeling long contexts. Secondly, it forgets information
from context before attention window, unlike LSTM, which
theoretically can propagate information indefinitely. We
want the better architecture to be able to process arbitrarily
long sequences efficiently, while preserving very long-term
dependencies.
Sliding window attention was introduced (Dai et al., 2019;
Beltagy et al., 2020) to handle infinitely long input sequences.
However, during sequence generation it disregards information beyond the
effective receptive field (approximately model depth × window size in Figure 1d). Vari-
ous approaches have attempted to address the long-context
problem by sparse attention (Child et al., 2019; Beltagy
et al., 2020; Zaheer et al., 2020; Kitaev et al., 2020; Roy
et al., 2021; Oren et al., 2024) and linear approximated
attention (Wang et al., 2020; Katharopoulos et al., 2020;
Choromanski et al., 2020; Xiong et al., 2021), showing ef-
fectiveness below the 1B scale. Yet, scaling laws (Tay et al.,
2022) suggest these approximations do not perform as well
at GPT-3 level. Current SoTA large language model (LLM)
leaderboards (LMSYS, 2023) do not feature architectures
primarily relying on approximated attention.
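To make the effective-receptive-field point above concrete, here is a small illustrative sketch (an assumption, not code from any of the cited papers) of the boolean mask behind sliding-window causal attention; with L stacked layers, information can only reach back roughly L × window positions.

```python
import torch

def sliding_window_causal_mask(seq_len, window):
    """Boolean mask for sliding-window causal attention: position i may
    attend to positions j with i - window < j <= i.  With L stacked layers,
    information propagates at most roughly L * window positions back,
    which is the effective receptive field discussed above."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)

mask = sliding_window_causal_mask(seq_len=8, window=3)
# mask[i, j] is True exactly when token i is allowed to attend to token j
```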
On the other hand, neuroscience links attention to multi-
sensory integration (Tang et al., 2016). Endogenous (goal-
driven) and exogenous (stimulus-driven) attention are dis-
tributed throughout sensory processing regions in the brain, and
arXiv:2404.09173v1 [cs.LG] 14 Apr 2024 |
1002.1945v2.pdf | arXiv:1002.1945v2 [math.GR] 14 May 2010
HYDRA GROUPS
W. DISON AND T. R. RILEY
Abstract. We give examples of CAT(0), biautomatic, free-by-cyclic, one-relator groups
which have finite-rank free subgroups of huge (Ackermannian) distortion. This leads to
elementary examples of groups whose Dehn functions are similarly extravagant. This
behaviour originates in manifestations of Hercules-versus-the-hydra battles in string-
rewriting.
2010 Mathematics Subject Classification: 20F65, 20F10, 20F67
Keywords and phrases: free-by-cyclic group, subgroup distortion, Dehn function, hydra
1. Introduction
1.1. Hercules versus the hydra. Hercules’ second labour was to fight the Lernaean hydra,
a beast with multiple serpentine heads enjoying magical regenerative powers: whenever
a head was severed, two grew in its place. Hercules succeeded with the help of his
nephew, Iolaus, who stopped the regrowth by searing the stumps with a burning torch after
each decapitation. The extraordinarily fast-growing functions we will encounter in this
article stem from a re-imagining of this battle.
For us, a hydra will be a finite-length positive word on the alphabet $a_1, a_2, a_3, \ldots$; that
is, it includes no inverse letters $a_1^{-1}, a_2^{-1}, a_3^{-1}, \ldots$. Hercules fights a hydra by striking off
its first letter. The hydra then regenerates as follows: each remaining letter $a_i$, where $i > 1$,
becomes $a_i a_{i-1}$ and the $a_1$ are unchanged. This process (removal of the first letter and
then regeneration) repeats, with Hercules victorious when (not if!) the hydra is reduced
to the empty word $\varepsilon$.
For example, Hercules defeats the hydra $a_2 a_3 a_1$ in five strikes:
$$a_2 a_3 a_1 \to a_3 a_2 a_1 \to a_2 a_1 a_1 \to a_1 a_1 \to a_1 \to \varepsilon.$$
(Each arrow represents the removal of the first letter and then regeneration.)
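The rewriting process is easy to simulate for very small hydras. The sketch below (an illustration, not from the paper; letters $a_i$ are encoded as integers $i$) implements one strike-and-regenerate round and counts strikes until the hydra is empty; it reproduces the five-strike example above.

```python
def strike_and_regenerate(hydra):
    # One round: Hercules removes the first letter; each remaining letter a_i
    # with i > 1 then becomes a_i a_{i-1}, and every a_1 is left unchanged.
    regrown = []
    for i in hydra[1:]:
        regrown.append(i)
        if i > 1:
            regrown.append(i - 1)
    return regrown

def strikes_to_defeat(hydra):
    # H(w): number of strikes needed to reduce the hydra w (a list of letter
    # indices) to the empty word.  Caution: these battles are Ackermannian in
    # length, so only very small hydras can be simulated.
    count = 0
    while hydra:
        hydra = strike_and_regenerate(hydra)
        count += 1
    return count

assert strikes_to_defeat([2, 3, 1]) == 5   # the worked example a_2 a_3 a_1 above
```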
Proposition 1.1. Hercules defeats all hydra.
Proof. When fighting a hydra in which the highest index present is $k$, no $a_i$ with $i > k$
will ever appear, and nor will any new $a_k$. The prefix before the first $a_k$ is itself a hydra,
which, by induction, we can assume Hercules defeats. Hercules will then remove that $a_k$,
decreasing the total number of $a_k$ present. It follows that Hercules eventually wins. □
However these battles are of extreme duration. Define $H(w)$ to be the number of strikes
it takes Hercules to vanquish the hydra $w$, and for integers $k \geq 1$, $n \geq 0$, define $H_k(n) :=$
Date: October 23, 2018.
|
2303.07678.pdf | Query2doc: Query Expansion with Large Language Models
Liang Wang and Nan Yang and Furu Wei
Microsoft Research
{wangliang,nanya,fuwei}@microsoft.com
Abstract
This paper introduces a simple yet effec-
tive query expansion approach, denoted as
query2doc, to improve both sparse and dense re-
trieval systems. The proposed method first gen-
erates pseudo-documents by few-shot prompt-
ing large language models (LLMs), and then
expands the query with generated pseudo-
documents. LLMs are trained on web-scale text
corpora and are adept at knowledge memoriza-
tion. The pseudo-documents from LLMs often
contain highly relevant information that can
aid in query disambiguation and guide the re-
trievers. Experimental results demonstrate that
query2doc boosts the performance of BM25 by
3% to 15% on ad-hoc IR datasets, such as MS-
MARCO and TREC DL, without any model
fine-tuning. Furthermore, our method also ben-
efits state-of-the-art dense retrievers in terms of
both in-domain and out-of-domain results.
1 Introduction
Information retrieval (IR) aims to locate relevant
documents from a large corpus given a user is-
sued query. It is a core component in modern
search engines, and researchers have invested decades
of effort in this field. There are two mainstream
paradigms for IR: lexical-based sparse retrieval,
such as BM25, and embedding-based dense re-
trieval (Xiong et al., 2021; Qu et al., 2021). Al-
though dense retrievers perform better when large
amounts of labeled data are available (Karpukhin
et al., 2020), BM25 remains competitive on out-of-
domain datasets (Thakur et al., 2021).
Query expansion (Rocchio, 1971; Lavrenko
and Croft, 2001) is a long-standing technique
that rewrites the query based on pseudo-relevance
feedback or external knowledge sources such as
WordNet. For sparse retrieval, it can help bridge
the lexical gap between the query and the docu-
ments. However, query expansion methods like
RM3 (Lavrenko and Croft, 2001; Lv and Zhai,
2009) have only shown limited success on popular datasets (Campos et al., 2016), and most state-of-
the-art dense retrievers do not adopt this technique.
In the meantime, document expansion methods like
doc2query (Nogueira et al., 2019) have proven to
be effective for sparse retrieval.
In this paper, we demonstrate the effectiveness
of LLMs (Brown et al., 2020) as query expan-
sion models by generating pseudo-documents con-
ditioned on few-shot prompts. Given that search
queries are often short, ambiguous, or lacking neces-
sary background information, LLMs can provide
relevant information to guide retrieval systems, as
they memorize an enormous amount of knowledge
and language patterns by pre-training on trillions
of tokens.
Our proposed method, called query2doc, gen-
erates pseudo-documents by few-shot prompting
LLMs and concatenates them with the original
query to form a new query. This method is simple
to implement and does not require any changes in
training pipelines or model architectures, making it
orthogonal to the progress in the field of LLMs and
information retrieval. Future methods can easily
build upon our query expansion framework.
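As a rough illustration of this pipeline, the sketch below builds a few-shot prompt, calls an LLM through a placeholder `llm_generate` callable (standing in for whichever LLM API is used; the exact prompt template and concatenation scheme in the paper may differ), and concatenates the generated pseudo-document with the original query.

```python
def build_few_shot_prompt(query, examples):
    """Format a few-shot prompt asking the LLM to write a passage answering
    the query.  `examples` is a list of (query, passage) pairs; this template
    is illustrative, not the paper's exact prompt."""
    parts = [f"Query: {q}\nPassage: {p}\n" for q, p in examples]
    parts.append(f"Query: {query}\nPassage:")
    return "\n".join(parts)

def query2doc_expand(query, examples, llm_generate):
    """Expand a query with an LLM-written pseudo-document (a sketch).
    `llm_generate` is a placeholder callable mapping a prompt string to
    generated text, e.g. a thin wrapper around an LLM API."""
    pseudo_doc = llm_generate(build_few_shot_prompt(query, examples))
    # The expanded query (original query concatenated with the pseudo-document)
    # is then fed to BM25 or a dense retriever in place of the original query.
    return f"{query} {pseudo_doc}"
```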
For in-domain evaluation, we adopt the MS-
MARCO passage ranking (Campos et al., 2016),
TREC DL 2019 and 2020 datasets. Pseudo-
documents are generated by prompting an im-
proved version of GPT-3 text-davinci-003 from
OpenAI (Brown et al., 2020). Results show
that query2doc substantially improves the off-the-
shelf BM25 algorithm without fine-tuning any
model, particularly for hard queries from the
TREC DL track. Strong dense retrievers, in-
cluding DPR (Karpukhin et al., 2020), SimLM
(Wang et al., 2023), and E5 (Wang et al., 2022)
also benefit from query2doc, although the gains
tend to diminish when distilling from a
strong cross-encoder-based re-ranker. Experi-
ments in zero-shot OOD settings demonstrate that
our method outperforms strong baselines on most
arXiv:2303.07678v2 [cs.IR] 11 Oct 2023 |
2303.03378.pdf | PaLM-E: An Embodied Multimodal Language Model
Danny Driess1,2, Fei Xia1, Mehdi S. M. Sajjadi3, Corey Lynch1, Aakanksha Chowdhery3,
Brian Ichter1, Ayzaan Wahid1, Jonathan Tompson1, Quan Vuong1, Tianhe Yu1, Wenlong Huang1,
Yevgen Chebotar1, Pierre Sermanet1, Daniel Duckworth3, Sergey Levine1, Vincent Vanhoucke1,
Karol Hausman1, Marc Toussaint2, Klaus Greff3, Andy Zeng1, Igor Mordatch3, Pete Florence1
1Robotics at Google   2TU Berlin   3Google Research
https://palm-e.github.io
[Figure 1 panels: Visual Q&A and scene understanding, task and motion planning, tabletop manipulation, mobile manipulation, visual Q&A/captioning, and language-only tasks, with example multimodal prompts and PaLM-E predictions; rendered figure text omitted.]
Figure 1: PaLM-E is a single general-purpose multimodal language model for embodied reasoning tasks, visual-language tasks,
and language tasks. PaLM-E transfers knowledge from visual-language domains into embodied reasoning – from robot planning in
environments with complex dynamics and physical constraints, to answering questions about the observable world. PaLM-E operates on
multimodal sentences , i.e. sequences of tokens where inputs from arbitrary modalities (e.g. images, neural 3D representations, or states, in
green and blue) are inserted alongside text tokens (in orange) as input to an LLM, trained end-to-end.
Abstract
Large language models have been demonstrated to perform
complex tasks. However, enabling general inference in the
real world, e.g. for robotics problems, raises the challenge
of grounding. We propose embodied language models to di-
rectly incorporate real-world continuous sensor modalities
into language models and thereby establish the link be-
tween words and percepts. Inputs to our embodied language
model are multi-modal sentences that interleave visual, con-
tinuous state estimation, and textual input encodings. We
train these encodings end-to-end, in conjunction with a pre-
trained large language model, for multiple embodied tasks
including sequential robotic manipulation planning, visual
question answering, and captioning. Our evaluations show
that PaLM-E, a single large embodied multimodal model,
can address a variety of embodied reasoning tasks, from a
variety of observation modalities, on multiple embodiments,
and further, exhibits positive transfer: the model benefits
from diverse joint training across internet-scale language,
vision, and visual-language domains. Our largest model,
PaLM-E-562B with 562B parameters, in addition to being
trained on robotics tasks, is a visual-language generalist
with state-of-the-art performance on OK-VQA, and retains
generalist language capabilities with increasing scale.
1. Introduction
Large language models (LLMs) demonstrate strong reason-
ing capabilities across various domains, including dialogue
(Glaese et al., 2022; Thoppilan et al., 2022), step-by-step
reasoning (Wei et al., 2022; Kojima et al., 2022), math prob-
lem solving (Lewkowycz et al., 2022; Polu et al., 2022), and
code writing (Chen et al., 2021a). However, a limitation of
such models for inference in the real world is the issue of
grounding: while training LLMs on massive textual data
may lead to representations that relate to our physical world,
connecting those representations to real-world visual and
physical sensor modalities is essential to solving a wider
range of grounded real-world problems in computer vision
and robotics (Tellex et al., 2020). Previous work (Ahn et al.,
2022) interfaces the output of LLMs with learned robotic
policies and affordance functions to make decisions, but is
limited in that the LLM itself is only provided with textual
input, which is insufficient for many tasks where the geo-
metric configuration of the scene is important. Further, in
our experiments we show that current state-of-the-art visual-
language models trained on typical vision-language tasks
such as visual-question-answering (VQA) cannot directly
solve robotic reasoning tasks.
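As an illustration of the multimodal-sentence idea described above, the sketch below splices continuous-input embeddings (e.g. projected ViT features or state encodings) into a text-token embedding sequence at placeholder positions before the sequence is fed to the LLM. The function and argument names are assumptions, not PaLM-E's code.

```python
import torch

def build_multimodal_sentence(token_embeds, placeholder_positions, modality_embeds):
    """Splice continuous-input embeddings into a text-token embedding sequence
    (a sketch of the 'multimodal sentence' idea, not PaLM-E's actual code).

    token_embeds:           (seq_len, d_model) text-token embeddings
    placeholder_positions:  sorted indices of <img>-style placeholder tokens
    modality_embeds:        one (n_i, d_model) tensor per placeholder, e.g.
                            projected ViT features or state encodings
    Returns a (new_len, d_model) sequence that is fed to the LLM.
    """
    pieces, prev = [], 0
    for pos, emb in zip(placeholder_positions, modality_embeds):
        pieces.append(token_embeds[prev:pos])   # text before the placeholder
        pieces.append(emb)                      # continuous embeddings instead
        prev = pos + 1                          # skip the placeholder token
    pieces.append(token_embeds[prev:])          # remaining text
    return torch.cat(pieces, dim=0)
```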
In this paper we propose embodied language models, which
arXiv:2303.03378v1 [cs.LG] 6 Mar 2023 |