beyond the full GPT-3 175B parameter model with no changes outside of model configurations.
The Weight Streaming design stands in contrast with existing accelerator execution modes. Recent trends in
large language model training typically require parallelizing training across tens to thousands of accelerator
devices, such as GPUs. These efforts require complicated combinations of data and model parallelism (e.g.,
(Smith et al., 2022)). Models must be carefully divided to fit into memory close to the devices to achieve high
throughput at relatively small per-device batch sizes. Weight Streaming permits moving the weights to the
wafer and gradients from the wafer—achieving solid performance at small per-system batch sizes—without
the need for model parallelism. | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
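To make the Weight Streaming execution model concrete, here is a minimal, purely illustrative sketch of one training step in which layer weights are streamed to the device on demand and weight gradients are streamed back. All interfaces (`stream_in`, `stream_out`, the layer and optimizer hooks) are hypothetical stand-ins, not Cerebras APIs.

```python
# Illustrative sketch of a weight-streaming training step: weights live in
# external memory and visit the wafer one layer at a time, so the model does
# not have to be partitioned across devices (no model parallelism).

def weight_streaming_step(layers, batch, stream_in, stream_out, update):
    # Forward pass: stream each layer's weights to the wafer on demand.
    activations = [batch.inputs]
    for layer in layers:
        w = stream_in(layer.weights)                  # host -> wafer
        activations.append(layer.forward(w, activations[-1]))

    grad = layers[-1].loss_grad(activations[-1], batch.labels)
    # Backward pass: stream weights in again, stream weight-gradients out.
    for layer, act in zip(reversed(layers), reversed(activations[:-1])):
        w = stream_in(layer.weights)
        w_grad, grad = layer.backward(w, act, grad)
        update(layer, stream_out(w_grad))             # wafer -> host, optimizer step
```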
Neural execution engines: Learning to execute subroutines. NeurIPS.
Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, and Xiang Ren. 2021. Refining language models
with compositional explanations. NeurIPS.
Xi Ye and Greg Durrett. 2022. The unreliability of explanations in few-shot in-context learning.
arXiv preprint arXiv:2205.03401.
Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, and Oana-Maria Camburu. 2021. Few-shot
out-of-domain transfer learning of natural language explanations. arXiv preprint arXiv:2112.06204.
Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using “annotator rationales” to improve
machine learning for text categorization. NAACL.
Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. arXiv preprint arXiv:1410.4615.
Eric Zelikman, Yuhuai Wu, and Noah D. Goodman. 2022. STaR: Bootstrapping reasoning with
reasoning. arXiv preprint arXiv:2203.14465.
Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models |
[71] OpenAI. Gpt-4 technical report, 2023. 4, 6, 7
[72] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-
Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.
Lora: Low-rank adaptation of large language models. arXiv
preprint arXiv:2106.09685, 2021. 4, 5, 20, 29
[73] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.
Blip-2: Bootstrapping language-image pre-training with
frozen image encoders and large language models. arXiv
preprint arXiv:2301.12597, 2023. 5, 6
[74] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language
model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association
for Computational Linguistics (Volume 1: Long Papers), pages 320–335, 2022. 6, 7
[75] Ryuichiro Hataya, Han Bao, and Hiromi Arai. Will large-
scale generative models corrupt future datasets? In ICCV,
2023. 6, 27 | Let’sThinkOutsidetheBox |
3.1.6 Moral Foundations Questionnaire (MFQ). Numerous factors, such as sociocultural context and individual
personality traits, influence perceptions of morality. We adapted the MFQ [31] to evaluate how the observer
integrates the concept of human augmentation into their personal values, cultural norms, and political ideologies.
The MFQ quantifies moral convictions. It assesses an individual’s moral sensitivity across five theoretical
dimensions: Harm/Care, Fairness/Reciprocity, Ingroup/Loyalty, Authority/Respect, and Purity/Sanctity.
We extracted and adapted the items relevant to human augmentation. For example, the item “Whether or not someone
violated standards of purity and decency” from the MFQ is reflected in the initial pool of items as “An augmented
human would violate standards of purity and decency.” | Society’sAttitudesTowardsHumanAugmentation |
salaried employees were considered, the decision to work with Toloka crowd-workers was taken after
a review of service providers and their compensation practices - most would not provide sufficient
transparency and guarantees about worker compensation. The determination of compensation took
into consideration different minimum wage rates across countries and their corresponding purchasing
power. We limited annotation eligibility to countries where the hourly pay rate of $7.30 was equivalent
to the highest minimum wage in the US ($16.50) in terms of purchasing power parity. | StarCoder_paper (1) |
Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk
Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito,
David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani
Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor
Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang,
Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck,
Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022. | Llama2 |
Journal of Information Science, 2022, pp. 1–11 (cid:2) The Author(s), DOI: 10.1177/01655515221112844
•
•
•
•
•
•
Rajabi and Etminani
5 | Knowledge-graph-based explainable AI- A systematic review |
Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442, 2022.
[111] Albert Webson and Ellie Pavlick. Do prompt-based models really understand the meaning of their prompts? In Proceedings of the 2022 Conference
of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2300–2344, 2022.
[112] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language
models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
[113] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald
Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models.
Transactions on Machine Learning Research, 2022. Survey Certification. | Harnessing the Power of LLMs in Practice- A Survey on ChatGPT and Beyond |
input: 0, 0, 0, 0  0, 3, 4, 0  0, 7, 6, 0  0, 0, 0, 0  output: 3, 0, 0, 4  0, 0, 0, 0  0, 0, 0, 0  7, 0, 0, 6
input: 0, 0, 0, 0  0, 5, 6, 0  0, 8, 3, 0  0, 0, 0, 0  output: 5, 0, 0, 6  0, 0, 0, 0  0, 0, 0, 0  8, 0, 0, 3
input: 0, 0, 0, 0  0, +#, B, 0  0, @, 慶, 0  0, 0, 0, 0  output: +#, 0, 0, B  0, 0, 0, 0  0, 0, 0, 0  @, 0, 0, 慶
Fig. 2: Pre-trained LLMs out-of-the-box may serve as basic versions of general pattern machines that can recognize and
complete sequences of numeric or arbitrary (symbolic) tokens expressing abstract problems in robotics and sequential
decision-making. Experiments show that to an extent, LLMs can in-context learn (i) sequence transformations (e.g.,
to reason over spatial rearrangements of symbols, for dynamics modeling and next state prediction on downsampled
images), (ii) completion of simple functions (e.g., to extrapolate kinesthetic demonstrations), or (iii) meta-patterns to | LargeLanguageModelsasGeneralPatternMachines |
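The grid examples in Fig. 2 can be serialized into a single in-context prompt along these lines; a minimal sketch, where `complete` is a hypothetical stand-in for any LLM completion call:

```python
# Sketch: serialize grid-transformation examples into an in-context prompt
# and ask the model to complete the pattern for a new query grid.

def grid_to_str(grid):
    return " ".join(", ".join(str(v) for v in row) for row in grid)

examples = [
    ([[0,0,0,0],[0,3,4,0],[0,7,6,0],[0,0,0,0]],
     [[3,0,0,4],[0,0,0,0],[0,0,0,0],[7,0,0,6]]),
    ([[0,0,0,0],[0,5,6,0],[0,8,3,0],[0,0,0,0]],
     [[5,0,0,6],[0,0,0,0],[0,0,0,0],[8,0,0,3]]),
]
query = [[0,0,0,0],[0,1,2,0],[0,9,4,0],[0,0,0,0]]

prompt = "".join(f"input: {grid_to_str(i)} output: {grid_to_str(o)} "
                 for i, o in examples)
prompt += f"input: {grid_to_str(query)} output:"
# completion = complete(prompt)  # expected continuation: "1, 0, 0, 2  0, 0, 0, 0 ..."
```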
(2023) posited that excessively intricate exemplars do not aid simple problems. Instead, they added
demonstrations with correct answers to the sample pool and optimized the demonstration
selection model via reinforcement learning. These studies either disregarded the hazards of utilizing
demonstrations containing flawed reasoning chains or refrained from employing demonstrations
containing reasoning chains that would lead to erroneous answers.
Through self-correction, our approach selects challenging yet answerable questions, accompanied
by reasoning chains, as exemplars of moderate difficulty, which enhances the LLMs’
generalizability across varying levels of difficulty. | Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models |
Figure 3: Our proposed 1D U-Net architecture. Each UNetBlock (top) consists of several U-Net items (bottom).
In each U-Net item (bottom), we use a 1D convolutional ResNet (R), and a modulation unit (M) to provide the
diffusion noise level as a feature vector conditioning. For Stage 1, we use an inject item (I) to inject external
channels as conditioning, and for Stage 2, we use an attention item (A) to share time-wise information, and a
cross-attention item (C) to condition on an external (text) embedding. Moreover, for the UNetBlocks, we can
recursively nest them, which we indicate by the inner dashed region on the top.
the diffusion noise level, and an inject item to concatenate external channels to the ones at the current
depth. Note that inject items are applied only at a specific depth in the decoder in the first stage to
condition on the latent representation of the music. | MOUSAI |
During each round of code generation, we execute the generated program to obtain environment
feedback and execution errors from the code interpreter, which are incorporated into GPT-4’s prompt
for the next round of code refinement. This iterative process repeats until self-verification validates
the task’s completion, at which point we add this new skill to the skill library and ask the automatic
curriculum for a new objective (Fig. 2). If the agent gets stuck after 4 rounds of code generation, then
we query the curriculum for another task. This iterative prompting approach significantly improves
program synthesis for embodied control, enabling VOYAGER to continuously acquire diverse skills
without human intervention.
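A condensed sketch of this iterative prompting loop follows; the GPT-4, interpreter, verifier, and curriculum interfaces are illustrative placeholders, not VOYAGER's actual API.

```python
# Sketch of VOYAGER-style iterative prompting: generate code, execute it,
# feed environment feedback and execution errors back into the prompt, and
# stop when self-verification validates task completion.

MAX_ROUNDS = 4  # per the text: query the curriculum for another task if stuck

def acquire_skill(task, skill_library, curriculum, generate, execute, verify):
    feedback, errors = None, None
    for _ in range(MAX_ROUNDS):
        program = generate(task, skill_library, feedback, errors)
        feedback, errors = execute(program)      # environment + code interpreter
        if verify(task, feedback):               # self-verification passed
            skill_library.add(task, program)     # add the new skill
            return curriculum.next_objective()
    return curriculum.another_task()             # stuck: ask for a new task
```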
3 Experiments
3.1 Experimental Setup | VOYAGER- An Open-Ended Embodied Agent with Large Language Models |
method at predicting human preferences between explanations.
Step 3: Score the explanations by comparing the simulated and actual neuron behavior
Conceptually, given an explanation and simulation strategy, we now have a simulated neuron, a "neuron" for which we can predict activation values for any given text excerpt. To score an explanation, we want to compare this simulated neuron against the real neuron for which the explanation was generated. That is, we want to compare two lists of values: the | Language models can explain neurons in language models |
performing the style mixing at all denoising timesteps, we
begin mixing at different starting points, such that starting
later in the denoising process should preserve more details
from the geometry concept. | A Neural Space-Time Representation for Text-to-Image Personalization |
• Barn
• Dog
• Tortoise Plushy
• Cat
• Teddybear
• Wooden Pot
All models are trained on the same training set and ini-
tialization token, when applicable. For a list of all 15 text
prompts considered in the evaluation protocol, please refer
to Table 2.
[Figure 13 panels, each prompted with “A photo of S∗”: Real Sample & Prompt; No Time Conditioning; No Space Conditioning; No Space nor Time Conditioning; Both Space and Time Conditioning.]
Figure 13. Additional results validating our space-time conditioning of NeTI. We train NeTI with and without our time and space con-
ditioning. All models are trained for the same number of optimization steps. As can be seen, the combination of both time and space is
essential for attaining high visual fidelity.
B. Storage Requirements | A Neural Space-Time Representation for Text-to-Image Personalization |
pipeline where such filtering is only the first stage. Subsequently, crowd workers filter the subset down using human
judgement, and at the final stage experts in photography are employed to create the dataset. While effective, this process has several
drawbacks compared to Diffusion-DPO. First, necessitating training on existing data can be a bottleneck, both in terms of
scale and potential applications. While [9] reports lesser text faithfulness improvements as well, these are likely due to the
hand-written captions, a much more costly data collection stage than preferences. The Emu pipeline is not generalizable to
different types of feedback as DPO is (e.g., outside of recaptioning it is non-obvious how such an approach can improve
text-image alignment).
S2. Details of the Primary Derivation
Starting from Eq. (5), we have | DiffusionModelAlignmentUsing Direct Preference Optimization |
Figure 1: Example of response of Code Llama - Instruct (34B) when queried for a specific shell command.
• Infilling. Autoregressive training and fine-tuning of LLMs is suitable for prompt completion, but does
not provide the capability to fill a missing portion of text while taking the full surrounding context into
account. Our code-training for 7B and 13B Code Llama models features a multitask objective (Fried
et al., 2023) consisting of both autoregressive and causal infilling prediction, enabling applications such as
real-time completion in source code editors or docstring generation. Similarly to Bavarian et al. (2022);
Li et al. (2023), our ablation study shows that infilling capabilities come at low cost in code generation
performance for a given training compute budget (Section 3.2). | CodeLlama2 |
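As an illustration of the infilling setup, a prompt under the prefix-suffix-middle (PSM) formatting of Bavarian et al. (2022) might look like the following; the sentinel token names here are placeholders, since the exact special tokens vary by checkpoint.

```python
# Sketch of a fill-in-the-middle prompt: the model conditions on the code
# before and after a gap and predicts the missing middle, enabling
# editor-style completion and docstring generation.

prefix = 'def remove_non_ascii(s: str) -> str:\n    """'
suffix = '\n    return result\n'
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"
# The model is trained to continue with the missing middle, e.g. the
# docstring body and the line that computes `result`.
```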
Fine-tuning Fine-tuning aims to adapt a pre-trained LLM to downstream tasks, by updating weights
with the available supervision, which usually forms a dataset orders of magnitude smaller than the
one used for pre-training (Devlin et al., 2018). T5 (Raffel et al., 2020) was among the first to frame
fine-tuning into a text-to-text unified framework, with natural language instructions describing each
task. Instruction-tuning later extended fine-tuning by training jointly on several tasks (Wei et al.,
2021a; Aribandi et al., 2021), each described with natural language instructions. Instruction-tuning
quickly gained in popularity, due to its ability to drastically improve zero-shot performance of LLMs,
including on new tasks (unseen during training), and especially at larger models scale. Standard
instruction-tuning with multi-task supervised fine-tuning (commonly known as SFT) may still not
result in models that follow human intentions while being safe, ethical, and harmless, and can be
Anna Rumshisky, et al. Alexatm 20b: Few-shot learning using a large-scale multilingual seq2seq model. arXiv preprint arXiv:2208.01448, 2022.
[96] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya
Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint
arXiv:2206.04615, 2022.
[97] Ruixiang Tang, Yu-Neng Chuang, and Xia Hu. The science of detecting llm-generated texts. arXiv preprint arXiv:2303.07205, 2023.
[98] Ruixiang Tang, Mengnan Du, Yuening Li, Zirui Liu, Na Zou, and Xia Hu. Mitigating gender bias in captioning systems. In Proceedings of the Web
Conference 2021, pages 633–645, 2021.
[99] Ruixiang Tang, Xiaotian Han, Xiaoqian Jiang, and Xia Hu. Does synthetic data generation of llms help clinical text mining? arXiv preprint
arXiv:2303.04360, 2023. | Harnessing the Power of LLMs in Practice- A Survey on ChatGPT and Beyond |
In addition to modifying the training loss to improve localization, we can also augment
the data with this objective in mind by placing an object in multiple settings so that
resulting models extract the same features from an object irrespective of its location.
Instance Localization [Yang et al., 2021] leverages RoIAlign [He et al., 2017], an algorithm
designed for object detectors which extracts features corresponding to a specific image
patch. To this end, Instance Localization pastes a randomly chosen patch cut from the
foreground of one image onto two other images and extracts features corresponding to
only the pasted foreground patch, using a contrastive loss to ensure that the foreground
patch generates similar features regardless of the background present and regardless of
its location within an image. A competing approach estimates the location of an object
within the training image using saliency maps and then cuts and pastes these objects | A Cookbook of Self-Supervised Learning |
When confronted with complex and challenging mathematical problems, LLMs exhibit subpar
performance. Specifically, GPT-3 demonstrates nearly random performance, while GPT-3.5 shows
improvement, and GPT-4 performs the best [3]. Despite the advancements made in the new models,
it is important to note that the peak performance remains relatively low compared to that of experts
and these models lack the capability to engage in mathematical research [13]. The specific tasks of
algebraic manipulation and calculation continue to pose challenges for GPTs [13, 25]. The primary
reasons behind GPT-4’s low performance in these tasks are errors in algebraic manipulation and
difficulties in retrieving pertinent domain-specific concepts. Wu et al. [213] evaluated the use of
GPT-4 on difficult high school competition problems and GPT-4 reached 60% accuracy on half of
the categories. Intermediate algebra and precalculus can only be solved with a low accuracy rate | ASurveyonEvaluationofLargeLanguageModels |
In Section 6.1, we have introduced techniques for reducing the number of parame-
ters in an LLM for inference acceleration. These methods are general and agnostic
to input data, i.e., static for any given input sequence. However, there is another
line of methods that aims to improve the efficiency of LLM inference without reduc-
ing the number of parameters. Such methods typically are specific to different input
sequences and we term them as dynamic acceleration methods. In general, existing
dynamic acceleration methods include 3 categories, i.e., early exit, token pruning, and
token parallelism. Early exit accelerates model inference by terminating inference at
a particular layer based on some criterion, i.e., making an LLM shallower. On the other
hand, token pruning accelerates inference by skipping some tokens for higher layers
based on their importance, i.e., making an LLM input shorter. Last, token parallelism | Beyond Efficiency |
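A minimal sketch of the early-exit idea described above; the per-layer exit heads and the confidence threshold are illustrative assumptions, not a specific published method.

```python
# Early exit: stop propagating through higher layers once an intermediate
# classifier head is confident enough, making the model effectively shallower.
import torch

def early_exit_forward(layers, exit_heads, x, threshold=0.9):
    probs = None
    for layer, head in zip(layers, exit_heads):
        x = layer(x)
        probs = torch.softmax(head(x), dim=-1)
        if probs.max() >= threshold:   # confident enough: exit early
            break
    return probs
```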
for RLHF. Consequently, the reward mechanism swiftly learns to assign low scores to the undesirable tail-end of the
distribution and aligns towards the human preference. This phenomenon is illustrated in Figure 20, where we
can see that the worst answers are progressively removed, shifting the distribution to the right.
In addition, during annotation, the model has the potential to venture into writing trajectories that even the
best annotators may not chart. Nonetheless, humans can still provide valuable feedback when comparing two
answers, beyond their own writing competencies. Drawing a parallel, while we may not all be accomplished
artists, our ability to appreciate and critique art remains intact. We posit that the superior writing abilities of
LLMs, as manifested in surpassing human annotators in certain tasks, are fundamentally driven by RLHF, as
documented in Gilardi et al. (2023) and Huang et al. (2023). Supervised data may no longer be the gold | Llama2 |
We first briefly introduce the tools selected in experiments as follows:
Machine Translator. General-purpose language models may exhibit suboptimal proficiency when processing
text from multiple linguistic domains. Machine translators can effectively alleviate this issue by enabling
non-translation-dedicated language models to better comprehend multi-lingual texts. Following Toolformer, we
use NLLB (Costa-jussà et al., 2022) as our translator and choose MLQA (Lewis et al., 2020a), a multilingual
question answering benchmark, as the testbed. Given a context in English and a question in Arabic, the task
requires answering the question using English. We randomly sample 200 test instances from the original test
data. For the evaluation metric, we choose F1-score.
Calculator. Following the setting of Toolformer, we conduct experiments in which language models use
a simple calculator to solve math word problems. We choose a simple implementation for the calculator, | Tool Learning with Foundation Models |
3. Method
We propose IMavatar, an implicit morphable head avatar
that equips implicit surfaces with fine-grained expression
control by leveraging morphing-based deformation fields.
In this section, we first recap the deformation formulation
of the FLAME face model [35], followed by the representa-
tions for the canonical geometry, deformation, and texture
fields. Then, we introduce correspondence search to find
canonical points for image pixels and derive the analytical
gradients for end-to-end training.
3.1. Recap: FLAME Face Morphable Model
The FLAME face model [35] parameterizes facial geom-
etry with shape, pose, and expression components. Since
we focus on personal facial avatars, we specifically repre-
sent the pose- and expression-dependent shape variations.
The simplified FLAME mesh model is denoted by:
M(θ, ψ) = LBS(TP(θ, ψ), J(ψ), θ, W),  (1) | I M Avatar- Implicit Morphable Head Avatars from Videos |
arXiv preprint arXiv:2210.17323 (2022).
[80] Daniel Y Fu, Simran Arora, Jessica Grogan, Isys Johnson, Sabri Eyuboglu, Armin W Thomas, Benjamin Spector, Michael Poli, Atri Rudra, and Christopher
Ré. 2023. Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture. In NeurIPS.
[81] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In ICML. PMLR, 1183–1192.
[82] William A Gale, Kenneth W Church, and David Yarowsky. 1992. A method for disambiguating word senses in a large corpus. Computers and the Humanities
26 (1992), 415–439. | TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey |
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
arXiv:2204.02311.
Han Dai, Yi Zhang, Ziyu Gong, Nanqing Yang, Wei Dai,
Eric Song, and Qiankun Xie. 2021. Spatten: Efficient
sparse attention architecture with cascade token and
head pruning. In Advances in Neural Information
Processing Systems, volume 34.
Mingyu Gao, Jie Yu, Wentai Li, Michael C Dai,
Nam Sung Kim, and Krste Asanovic. 2022. ComputeDRAM: In-memory compute using off-the-shelf
DRAM. In Proceedings of the 27th ACM International Conference on Architectural Support for
Programming Languages and Operating Systems, pages 1065–1079.
Alex Graves. 2016. Adaptive computation time for re-
current neural networks. In International Conference
on Machine Learning, pages 3500–3509. PMLR. | LLM in a flash |
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hen-
nigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy,
Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre.
Training Compute-Optimal Large Language Models. arXiv:2203.15556 [cs], March 2022. URL
http://arxiv.org/abs/2203.15556.
Sara Hooker. The hardware lottery. Communications of the ACM, 64(12):58–65, November 2021.
ISSN 0001-0782. doi: 10.1145/3467017. URL https://doi.org/10.1145/3467017.
Le Hou, Richard Yuanzhe Pang, Tianyi Zhou, Yuexin Wu, Xinying Song, Xiaodan Song, and Denny
Zhou. Token Dropping for Efficient BERT Pretraining. arXiv:2203.13240 [cs], March 2022.
URL http://arxiv.org/abs/2203.13240.
Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le. Transformer Quality in Linear Time. | CRAMMING-TRAININGALANGUAGEMODELONA SINGLEGPUINONEDAY |
(3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.
Self-Reflection
Self-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error | LLM Powered Autonomous Agents _ Lil'Log |
{(v = 0)}, g(a), {(v = 1)}, g(b), {(v = 2)} is a path in G2, but there is no path from {(v = 0)} to {(v = 3)} in G1. Hence, VDA
is neither PL↓ nor P2↓.
(6–7) For both RRAa and RRAb, if ⟨s, t, a⟩ ∈ E1, then t ∈ R2(s). Hence, RRAa and RRAb are P1↑ and, thus, PS↑ by
Theorem 21.
(8–9) Consider Example 39. There is a path {u, v}, a, {u, v} in G1, but {u, v} ∉ R2({u, v}). Hence, GIDL is not P1↑. If we
remove action b, then there is a path {u, v}, g(a), {u, v} in G2, but {u, v} ∉ R1({u, v}). Hence, GIDL is not P1↓ either.
(10) Let s0, a1, s1, a2, . . . , am, sm be a path in G1. Let t0 = s0. Then there is a path t0, g(a1), t1, g(a2), . . . , g(am), tm in G2
such that ti ∈ f(si) for all i. Hence, DLBS is PW↑.
(11) In Example 41 we have {v, vϕ} ∈ f(R1({v})), but {v, vϕ} ∉ R2(f({v})). Hence, DLBS is not P↑. □
Without transformation functions, the only reasonable modelling of method ABS would be to define f(s) = s, which is | A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen |
Model Details
Model Developers: Meta AI
Overview: Llama 2 comes in a range of parameter sizes—7B, 13B, and 70B—as well as
pretrained and fine-tuned variations. Models input text only. Models generate text only.
Llama 2 is an auto-regressive language model that uses an optimized transformer
architecture. The tuned versions use supervised fine-tuning (SFT) and reinforce-
ment learning with human feedback (RLHF) to align to human preferences for
helpfulness and safety.
Data Freshness: Llama 2 was trained between January 2023 and July 2023.
This is a static model trained on an offline dataset. Future versions of the tuned
models will be released as we improve model safety with community feedback.
License: A custom commercial license is available at ai.meta.com/resources/
models-and-libraries/llama-downloads/
Feedback: Instructions on how to provide feedback or comments on the model can be
found in the model README, or by opening an issue in the GitHub repository
(https://github.com/facebookresearch/llama/). | Llama2 |
We note that all of the models above are entirely or partly trained on LibriSpeech.
C. Text Standardization
Since Whisper may output any UTF-8 string rather than a restricted set of graphemes, the rules for text standardization need
to be more intricate and comprehensive than those defined on e.g. ASCII characters. We perform the following steps to
normalize English texts in different styles into a standardized form, which is a best-effort attempt to penalize only when a
word error is caused by actually mistranscribing a word, and not by formatting or punctuation differences.
1. Remove any phrases between matching brackets ([, ]).
2. Remove any phrases between matching parentheses ((, )).
3. Remove any of the following words: hmm, mm, mhm, mmm, uh, um
4. Remove whitespace characters that come before an apostrophe ’
5. Convert standard or informal contracted forms of English into the original form. | RobustSpeechRecognitionviaLarge-ScaleWeakSupervision |
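A rough sketch of steps 1–4 in Python (the released normalizer is far more comprehensive; this only illustrates the listed rules):

```python
import re

def normalize_english(text: str) -> str:
    text = re.sub(r"\[[^\]]*\]", "", text)                   # 1. [bracketed] phrases
    text = re.sub(r"\([^)]*\)", "", text)                    # 2. (parenthesized) phrases
    text = re.sub(r"\b(hmm|mm|mhm|mmm|uh|um)\b", "", text)   # 3. filler words
    text = re.sub(r"\s+'", "'", text)                        # 4. space before apostrophe
    return text
```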
motivations; although misinformation is typically not designed to advance a
particular agenda, disinformation is often spread in service of concrete goals.
For instance, fake news is often designed to go viral on social media (Pennycook
and Rand 2018; Tandoc, Lim, and Ling 2018), enabling rapid transmission of
highly partisan content and offering a reliable stream of advertising revenue
(Tucker et al. 2018). In practice, however, determining a person or group’s
intentions is extremely difficult. It is hard to uncover people’s “ground truth”
beliefs about the veracity of a piece of information, and it is even harder to
ascertain their underlying motivations. That said, recognizing the range of
motivations remains useful, even if these motivations are hard to disentangle in the wild. | Social_Media_and_Democracy |
To do so, we designed a self-reflection prompt that makes Mistral 7B classify a prompt or a generated
answer. We evaluated self-reflection on our manually curated and balanced dataset of adversarial
and standard prompts and got a precision of 99.4% for a recall of 95.6% (considering acceptable
prompts as positives).
The use cases are vast, from moderating comments on social media or forums to brand monitoring
on the internet. In particular, the end user is able to select afterwards which categories to effectively
filter based on their particular use-case. | Mistral7B |
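A sketch of how such self-reflection moderation could be wired up; the category list, prompt wording, and `chat` call are illustrative assumptions, not Mistral's actual prompt.

```python
# Self-reflection moderation sketch: the model classifies a prompt or a
# generated answer, and the end user selects which categories to filter.

CATEGORIES = ["illegal activity", "hate or harassment", "self-harm", "acceptable"]

def self_reflect(chat, text: str) -> str:
    prompt = ("Classify the following content into one of these categories: "
              + ", ".join(CATEGORIES) + ".\n\nContent: " + text + "\nCategory:")
    return chat(prompt).strip().lower()          # `chat` is a model-call stub

def should_filter(chat, text: str, blocked: set) -> bool:
    return self_reflect(chat, text) in blocked   # user-selected categories
```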
2 The other major abstraction method is Hierarchical Task Networks (HTN), which originates from the Noah [79] and Nonlin [89] planners. It is based on
a hierarchy of methods that can be refined by predefined expansion patterns, and it is fundamentally different from state abstraction.
et al. [16]. However, ordered monotonicity is not sufficient to prevent heavy backtracking between levels [86], and even
when properties like the DRP guarantee that no such backtracking is needed, the abstraction hierarchy might still force the
generation of exponentially suboptimal ground solutions in the worst case [5].
1.1.2. Abstraction-based heuristics | A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen |
used in (Du et al., 2021; Chowdhery et al., 2022). We delve into understanding trade-offs between zero-shot
and finetuning performance and show that UL2 is Pareto-efficient with respect to both learning paradigms.
On one-shot summarization, UL2 triples the performance of an LM adapted T5 XXL model and is competitive
with (or outperforms) PaLM and LaMDA at the same compute cost. We release T5X-based Flax checkpoints
of the trained UL2 model. | UL2- Unifying Language Learning Paradigms |
’type’: ’literal’,
’value_or_uri’: ’Raw data for polymerization and intermediate products ...’}],
’distribution_dcat’:[{’accessURL_dcat’: [{’uri’: ’http://eplca.jrc.ec.europa.eu/ELCD3/’}],
’format_dcterms’: {’uri’: ’http://publications.europa.eu/resource/authority/file-type/ZIP’},
’license_dcterms’: [{’uri’: ’http://publications.europa.eu/resource/authority/licence/OP_DATPRO’}],
...}]}}]}}
VBELN,POSNR,MATNR,LFIMG,WADAT_IST
0000000000,000010,000000000000001111,100.000,20190909 | Knowledge-graph-based-rich-and-confidentiality-preserving-Ex_2022_Informatio |
• Support Vector Machines (SVMs): Support Vector Machines (SVMs) are a widely adopted
class of supervised learning algorithms extensively utilized for various speech classification
tasks [504]. They are particularly effective in domains like speaker recognition [174, 509, 510]
and phoneme recognition [52]. SVMs excel in their ability to identify optimal hyperplanes
that effectively separate different classes in the feature space. By leveraging this optimal
separation, SVMs enable accurate classification and recognition of speech patterns. As a
result, SVMs have become a fundamental tool in the field of speech analysis and play a vital
role in enhancing the performance of speech-related classification tasks.
• Hidden Markov Models (HMMs): Hidden Markov Models (HMMs) have gained significant
popularity as a powerful tool for performing various speech recognition tasks, particularly
ASR [149, 442]. In ASR, HMMs are employed to model the probability distribution of | AReviewofDeepLearningTechniquesforSpeechProcessing |
In our study, we examine three existing models: DiffSound by Yang et al. [38], AudioGen by Kreuk
et al. [16], and AudioLDM by Liu et al. [17]. AudioGen and DiffSound use text embeddings for con-
ditional generative training, while AudioLDM employs audio embeddings to avoid potential noise
from weak textual descriptions in the paired text-audio data. AudioLDM uses audio embeddings
from CLAP and asserts that they are effective in capturing cross-modal information. The models
were pre-trained on large datasets, including AudioSet, and fine-tuned on the AudioCaps dataset,
before evaluation, for enhanced performance. Thus, comparing them to our model TANGO would
not be entirely fair.
Despite being trained on a much smaller dataset, our model TANGO outperformed the baselines
that were trained on significantly larger datasets. We may largely attribute this to the use of LLM
FLAN-T5. Therefore, our model TANGO sets itself apart from the three existing models, making it | Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model |
log-likelihood (NLL) (argmini(−ln(pi)/|ci|), where pi is the model’s predicted probability of continuation
sequence ci, and |ci| is the length of that sequence). This approach will tend to favor longer continuations
with moderate probability, which might be preferred for some tasks. For comparison against prior works
that report minimum length-normalized NLL, we report Cerebras-GPT results in Tables 10 and 10.
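The selection rule above translates directly to code; a minimal sketch:

```python
import math

def pick_continuation(probs, lengths):
    """Return argmin_i of -ln(p_i) / |c_i|, the length-normalized NLL rule.

    probs[i] is the model's probability of continuation c_i; lengths[i] = |c_i|.
    """
    scores = [-math.log(p) / n for p, n in zip(probs, lengths)]
    return min(range(len(scores)), key=scores.__getitem__)
```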
C.3 Differences Between Cerebras-GPT and Other Models | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
[37] H. Liu, D. Tam, M. Muqeeth, J. Mohta, T. Huang, M. Bansal, and C. A. Raffel. Few-shot
parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in
Neural Information Processing Systems, 35:1950–1965, 2022.
[38] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer,
and V. Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint
arXiv:1907.11692, 2019.
[39] S. Longpre, L. Hou, T. Vu, A. Webson, H. W. Chung, Y. Tay, D. Zhou, Q. V. Le, B. Zoph, J. Wei,
et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv
preprint arXiv:2301.13688, 2023.
[40] S. Min, M. Lewis, L. Zettlemoyer, and H. Hajishirzi. Metaicl: Learning to learn in context.
arXiv preprint arXiv:2110.15943, 2021. | QLORA |
5 Attention Patterns of Memory Operations
By examining the RMT attention on specific segments, as shown in Figure 6, we observe that
memory operations correspond to particular patterns in attention. Furthermore, the high extrapolation
performance on extremely long sequences, as presented in Section 5.2, demonstrates the effectiveness
of learned memory operations, even when used thousands of times. This is particularly impressive,
considering that these operations were not explicitly motivated by the task loss.
6 Related work | Scaling Transformer to 1M tokens and beyond with RMT |
[52] Ma, X., Zhou, C., Kong, X., He, J., Gui, L., Neubig, G., May, J., Zettlemoyer, L.:
Mega: Moving average equipped gated attention. In: The Eleventh International
Conference on Learning Representations (2023). https://openreview.net/forum?id=qNLe3iq2El
[53] Peng, B., Alcaide, E., Anthony, Q., Albalak, A., Arcadinho, S., Cao, H., Cheng,
X., Chung, M., Grella, M., GV, K.K., et al.: Rwkv: Reinventing rnns for the
transformer era. arXiv preprint arXiv:2305.13048 (2023)
[54] Wang, X., Xiong, Y., Wei, Y., Wang, M., Li, L.: Lightseq: A high performance
inference library for transformers. NAACL-HLT 2021, 113 (2021)
[55] NVIDIA: FasterTransformer: A Faster Transformer Framework. https://github.
com/NVIDIA/FasterTransformer (2021) | Beyond Efficiency |
news agenda but are increasingly supplemented by platform companies serving
as secondary gatekeepers in terms of reaching a wide audience. This is a media
environment that challenges many established institutions, including news
media, gives technology companies more institutional and infrastructural
roles (and power), and in many ways empowers individual media users,
making democracy relatively more demotic and popular, even as many people
are also exposed to more abuse and harassment and many frequently use digital
media in ambiguous ways, including ways that challenge established norms and
values associated with liberal democracy. | Social_Media_and_Democracy |
Coarse-to-fine interpolation Figure 9 shows interpolations between a pair of source CelebA
256 × 256 images as we vary the number of diffusion steps prior to latent space interpolation.
Increasing the number of diffusion steps destroys more structure in the source images, which the
model completes during the reverse process. This allows us to interpolate at both fine granularities
and coarse granularities. In the limiting case of 0 diffusion steps, the interpolation mixes source
images in pixel space. On the other hand, after 1000 diffusion steps, source information is lost and
interpolations are novel samples.
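A sketch of the procedure: diffuse both sources for t steps, mix in latent space, then let the learned reverse process complete the image. Here `q_sample` and `p_reverse` stand in for the usual DDPM forward (noising) and reverse (denoising) operators, and sharing the noise between the two sources is an implementation choice of this sketch.

```python
import torch

def coarse_to_fine_interpolate(x0_a, x0_b, t, lam, q_sample, p_reverse):
    noise = torch.randn_like(x0_a)
    xt_a = q_sample(x0_a, t, noise)      # destroy t steps of structure
    xt_b = q_sample(x0_b, t, noise)
    xt = (1 - lam) * xt_a + lam * xt_b   # interpolate in latent space
    return p_reverse(xt, t)              # model fills in the destroyed structure
```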
Figure 9: Coarse-to-fine interpolations that vary the number of diffusion steps prior to latent mixing.
[Figure 10 plots Inception Score and FID against reverse process steps (T − t), from 0 to 1,000.]
Figure 10: Unconditional CIFAR10 progressive sampling quality over time | Denoising Diffusion Probabilistic Models |
[39] William E. Lorensen and Harvey E. Cline. Marching cubes:
A high resolution 3D surface construction algorithm. Inter-
national Conference on Computer Graphics and Interactive
Techniques (SIGGRAPH), 21(4):163–169, 1987. 5
[40] Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, and
Michael J. Black. SCALE: Modeling clothed humans with
a surface codec of articulated local elements. In Computer
Vision and Pattern Recognition (CVPR), pages 16082–16093,
2021. 3
[41] Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Ger-
ard Pons-Moll, Siyu Tang, and Michael J. Black. Learning to
dress 3D people in generative clothing. In Computer Vision
and Pattern Recognition (CVPR), pages 6468–6477, 2020. 2,
5, 12
[42] Qianli Ma, Jinlong Yang, Siyu Tang, and Michael J. Black.
The power of points for modeling humans in clothing. In
International Conference on Computer Vision (ICCV), pages
10974–10984, 2021. 3 | ICON |
The abstract embedding technique prioritizes top K re-
trieval based on document abstracts (or summaries), offering
a comprehensive understanding of the entire document con-
text. Additionally, the metadata filtering technique leverages
document metadata to enhance the filtering process. An in-
novative approach, the graph indexing technique, transforms
entities and relationships into nodes and connections, sig-
nificantly improving relevance, particularly in the context of
multi-hop problems. | RAG forLargeLanguageModels-ASurvey |
While huge pretrained LMs often exhibit impressive diverse zero-shot performance, the practice of
massively multi-tasking an LM via fine tuning it simultaneously on many diverse NLP tasks has
been shown to dramatically improve performance across tasks and domains. For example, Sanh et al.
(2021) and Aribandi et al. (2021) fine tuned the 11B parameter T5 model (Raffel et al., 2019) on their
curated suites of 62 and 107 datasets, respectively, and present two new multi-tasked models called
T0 and EX-T5, respectively. Wei et al. (2021) fine tuned Google’s internal 137B parameter pretrained
LM on their curated suite of 60 datasets, producing a multi-tasked model called FLAN. Min et al.
(2021) fine tuned the 770M parameter GPT2 (Radford et al., 2019) on a curated suite of 142 datasets,
and Ouyang et al. (2022) fine tuned the 175B parameter GPT3 (Brown et al., 2020) on disparate
datasets of human instructions, using reinforcement learning from human feedback, producing a new | STANDING ON THE SHOULDERS OF GIANT FROZEN LANGUAGE MODELS |
One example is the setting of [1], where robots have to estimate the fraction of black tiles in a grid.
Each of the robots is very simple and performs a random walk. Whenever, two or more robots are
close to each other, they can communicate with each other. In the end, the robots have to agree on a
joint estimate of the fraction of black tiles. The arising questions here are:
• How much can multiple random walks speed up the process?
• How many samples have to be taken?
• What happens if the communication is noisy?
The goal is also to collaborate with researchers in the robotics community by modelling and
analyzing systems theoretically. In addition to a solid understanding of Markov chains, students
should be interested in collaborating with researchers across different disciplines. | informatics-phd-projects-2022-23 |
[552] Gilbert, N., J. Doran. Simulating Societies: The Computer Simulation of Social Phenomena.
Routledge Library Editions: Artificial Intelligence. Taylor & Francis, 2018.
[553] Hamilton, J. D. A new approach to the economic analysis of nonstationary time series and the
business cycle. Econometrica: Journal of the econometric society, pages 357–384, 1989.
[554] Zhang, G. P. Time series forecasting using a hybrid ARIMA and neural network model.
Neurocomputing, 50:159–175, 2003.
[555] Kirby, S., M. Dowman, T. L. Griffiths. Innateness and culture in the evolution of language.
Proceedings of the National Academy of Sciences, 104(12):5241–5245, 2007.
79
[556] Shibata, H., S. Miki, Y. Nakamura. Playing the werewolf game with artificial intelligence for
language understanding. CoRR, abs/2302.10646, 2023.
[557] Junprung, E. Exploring the intersection of large language models and agent-based modeling
via prompt engineering. CoRR, abs/2308.07411, 2023. | TheRiseandPotentialofLargeLanguageModel BasedAgents |
rization Across Neural Language Models,” Mar. 2023.
[60] D. Ganguli, D. Hernandez, L. Lovitt, N. DasSarma, T. Henighan, A. Jones, N. Joseph,
J. Kernion, B. Mann, A. Askell, Y. Bai, A. Chen, T. Conerly, D. Drain, N. Elhage, S. E. Showk,
S. Fort, Z. Hatfield-Dodds, S. Johnston, S. Kravec, N. Nanda, K. Ndousse, C. Olsson, D. Amodei,
D. Amodei, T. Brown, J. Kaplan, S. McCandlish, C. Olah, and J. Clark, “Predictability and
Surprise in Large Generative Models,” in 2022 ACM Conference on Fairness, Accountability,
and Transparency, pp. 1747–1764, June 2022.
[61] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma,
D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus,
“Emergent Abilities of Large Language Models,” Oct. 2022.
[62] R. Ngo, L. Chan, and S. Mindermann, “The alignment problem from a deep learning perspec-
tive,” Feb. 2023.
Language model pre-training has been shown to
capture a surprising amount of world knowledge,
crucial for NLP tasks such as question answering.
However, this knowledge is stored implicitly in
the parameters of a neural network, requiring ever-
larger networks to cover more facts.
To capture knowledge in a more modular and in-
terpretable way, we augment language model pre-
training with a latent knowledge retriever, which
allows the model to retrieve and attend over docu-
ments from a large corpus such as Wikipedia, used
during pre-training, fine-tuning and inference. For
the first time, we show how to pre-train such a
knowledge retriever in an unsupervised manner,
using masked language modeling as the learning
signal and backpropagating through a retrieval
step that considers millions of documents.
We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training
(REALM) by fine-tuning on the challenging task
of Open-domain Question Answering (Open-QA). | REALM |
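The retrieve-then-predict idea can be written as a marginal likelihood over retrieved documents z, which is what makes the retrieval step differentiable:

```latex
% REALM's decomposition: marginalize over documents z from the corpus Z, so
% the masked-language-modeling signal backpropagates into the retriever p(z|x).
\[
  p(y \mid x) \;=\; \sum_{z \in \mathcal{Z}} p(y \mid z, x)\, p(z \mid x)
\]
```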
tasks. We decompose the problem into two components including offline and online stages. In the
offline stage, MLCopilot canonicalizes historical data and creates an experience pool. LLMs are
then used to extract valuable knowledge from historical experience. In the online stage, MLCopilot
retrieves experiences from the most relevant tasks from the experience pool, given the description of
the target task. It then interacts with LLMs to obtain multiple suggested ML solutions in one round.
We demonstrate that, with a carefully designed framework, LLMs can not only elicit meaningful
knowledge from historical experiences but also provide reasonable and competitive ML solutions for
novel tasks.
Our work presents a three-fold contribution, which can be summarized as follows. (i) To the best of
our knowledge, we are the first to utilize LLMs as a tool to generate solutions for novel ML tasks. (ii)
A novel retrieve-and-prompt framework has been proposed to solve ML tasks almost instantaneously, | MLCopilot- Unleashing the Power of Large Language Models in Solving Machine Learning Tasks |
Empowering employees and delivery service partners
In addition to its focus on customers, Amazon strives to make every day better for its employees and delivery service partners.
For example, the company: | AMZN-Q3-2023-Earnings-Release |
Fig. 3: Convergent Pearson’s correlations between IPIP-NEO and BFI scores by model.
Bar chart illustrates the similarities (i.e., convergence) between IPIP-NEO and BFI score
variation for each Big Five domain. Stronger correlations indicate higher levels of convergence
and provide evidence for convergent validity. EXT = extraversion; AGR = agreeableness;
CON = conscientiousness; NEU = neuroticism; OPE = openness.
External Validation Results
Convergent and Discriminant Validation Results. The external validity of personality
in LLMs—in terms of convergent and discriminant validity—varies across two axes: model
size and model training method. Figure 3 illustrates convergent validity in terms of how IPIP-
NEO and BFI scores convergently correlate across models. Table 8 summarizes the average
convergent and discriminant rs across models. | PersonalityTraitsinLargeLanguageModels |
Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand
Sivaprasad, Chiachun Hsieh, Nazneen Fatema Ra-
jani, Xiangru Tang, Aadit Vyas, Neha Verma,
Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto,
Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Murori
Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu,
Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, and
Richard Socher. 2020. Dart: Open-domain struc-
tured data record to text generation.
Colin Raffel, Noam Shazeer, Adam Roberts, Kather-
ine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J. Liu. 2020. Exploring
the limits of transfer learning with a unified text-to-
text transformer. Journal of Machine Learning Re-
search, 21(140):1–67. | Prefix-Tuning |
the generation of unexpected, irrelevant, or coun-
terfactual output (Zhang et al., 2023c). Several
works in hallucination trace down the occurrence
of hallucination to the lack of pertinent knowledge
and the internalization of false knowledge from
the pretraining corpora (Li et al., 2022; McKenna
et al., 2023; Dziri et al., 2022). To mitigate hal-
lucination, the curation of pretraining corpora is
adopted by many LLMs mainly focusing on the
extracting of high-quality data, e.g., GPT-3 (Brown
et al., 2020), Llama 2 (Touvron et al., 2023b), and
Falcon (Penedo et al., 2023). The manually cu-
rated (Zhou et al., 2023a) and automatically se-
lected (Chen et al., 2023c; Cao et al., 2023; Lee
et al., 2023b) high-quality instruction data are also
experimentally shown to be effective in reducing
hallucination during the SFT stage. It can be seen
from the previous research that data management
in both the pretraining and SFT stages can be a
promising solution to hallucination. | DataManagementForLargeLanguageModels-ASurvey |
Is there any social principle for llm-based agents? CoRR,
abs/2308.11136, 2023.
[658] Baum, S. A survey of artificial general intelligence projects for ethics, risk, and policy. Global
Catastrophic Risk Institute Working Paper, pages 17–1, 2017.
[659] Lecun, Y. https://twitter.com/ylecun/status/1625127902890151943.
[660] Zhao, S. Can Large Language Models Lead to Artificial General Intelligence?
[661] Brandes, N. Language Models are a Potentially Safe Path to Human-Level AGI.
[662] Zocca, V. How far are we from AGI?
[663] Ilya Sutskever, L. F. Ilya Sutskever: Deep Learning | Lex Fridman Podcast #94.
[664] Lecun, Y. https://twitter.com/ylecun/status/1640063227903213568.
[665] LeCun, Y. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. Open
Review, 62, 2022. | TheRiseandPotentialofLargeLanguageModel BasedAgents |
DeepMind's Atari game system, DQN, for example, almost entirely lacks explicit
cognitive models. When DQN learned to play Breakout it did not abstract individual
board positions into scene graphs representing the location and extent of individual
bricks; there was no direct representation of where the paddle is, the velocity of the ball,
or the underlying physics of the game, nor any abstract realization of the ricochet
dynamic that makes the game so engaging. In the language of reinforcement learning,
the system was model-free.15 Yet superhuman performance was achieved. (Remarkably,
in some games, like Pong, that are strictly deterministic with known starting conditions,
successful play can be achieved without looking at the screen at all (Koul, Greydanus,
and Fern, arXiv preprint arXiv:1811.12530, 2018).) | The Next Decade in AI- |
supervised learning [Tarvainen and Valpola, 2017], and even model average in supervised
and generative modeling [Jean et al., 2014]. | A Cookbook of Self-Supervised Learning |
1/α to the stretched sigmoid:
˜fsigmoid(x) = (1/α) fsigmoid(αx),  α ∈ [0, 1].  (7)
We refer to ˜fsigmoid as the scaled-sigmoid, which is visu-
alized in Figure 6 (right). Since ˜fsigmoid can surpass the
[0, 1] bounds, we employ an annealing strategy: initializing
α with a small value (0.5 in our experiment) to accelerate
training; over time, we gradually increase α to 1, ensuring
the output albedo lies in [0, 1].
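In code, the scaled-sigmoid and its annealing might look like this; the linear schedule is an assumption, since the text only states that α starts at 0.5 and is gradually increased to 1.

```python
import torch

def scaled_sigmoid(x, alpha):
    # Eq. (7): sigmoid stretched by alpha, rescaled by 1/alpha;
    # for alpha < 1 the output can exceed the [0, 1] bounds.
    return torch.sigmoid(alpha * x) / alpha

def alpha_schedule(step, total_steps, alpha0=0.5):
    # Anneal alpha from alpha0 to 1 so the final albedo lies in [0, 1].
    return alpha0 + (1.0 - alpha0) * min(step / total_steps, 1.0)
```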
3.4. Adaptive Perp-Neg Algorithm
Previous Perp-Neg Algorithm. Existing text-to-3D algo-
rithms often face a challenging Janus (multi-head) prob-
lem [18]. Rather than generating a coherent 3D output, the
learned 3D object tends to exhibit repeated front views at
different angles, as the front view has been more prominent
in the training data of text-to-image models. For instance,
when generating a 3D animal, the resulting output often has
multiple faces without capturing the natural side and back
views of the animal.
To address this issue, [1] propose a Perp-Neg algorithm: | Instant3D |
To evaluate the agents, we ask crowdworkers to have dialogs with each of the two LaMDA and the two PT instances,
producing 600 dialog turns in total. In addition, we ask another set of crowdworkers to label each of the generated
responses in their original context according to whether they are role-consistent and helpful (defined in Section 4.2)
relative to their target roles. Each response is labeled three times by different crowdworkers. All the crowdworkers are
provided with the role definitions that are listed in Table 2 to understand what to expect from each agent.
LaMDA applications perform significantly better than PT applications in Helpfulness as shown quantitatively in Table 5
and qualitatively in Table 6. Although the reasons for PT losses vary, the most common error patterns could be attributed
to PT’s lower performance on foundation metrics such as safety, groundedness and quality (foundation metrics are
shown in Figure 4). | LaMDA- Language Models for Dialog Applications |
Conclusions and Steps for Future Research
As online hate speech has become increasingly visible on social media platforms,
it has emerged at the center of academic, legal, and policy agendas. Despite
increased attention to online hate speech, as this chapter demonstrates, the
debate over how to define online hate speech is far from settled. Partly as
a consequence of these definitional challenges, and partly as a result of the
highly context-specific and evolving nature of online hate speech, detecting
hateful content systematically is an extremely difficult task.
for efficient and robust semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2021c.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International
Conference on Learning Representations (ICLR), 2015.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Al-
berti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N.
Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav
Petrov. Natural questions: A benchmark for question answering research. In Association for
Computational Linguistics (ACL), 2019.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the
carbon emissions of machine learning. arXiv preprint arXiv:1910.09700, 2019. | DoReMi- Optimizing Data Mixtures Speeds Up Language Model Pretraining |
3.1. Preliminaries
Aligning specific pairs of modalities. Contrastive learn-
ing [27] is a general technique for learning an embedding
space by using pairs of related examples (positives) and un-
related examples (negatives). Using pairs of aligned ob-
servations, contrastive learning can align pairs of modal-
ities such as (image, text) [59], (audio, text) [26], (image,
depth) [68], (video, audio) [49] etc. However, in each case,
the joint embeddings are trained and evaluated using the
same pairs of modalities. Thus, (video, audio) embeddings
are not directly applicable for text-based tasks while (image,
text) embeddings cannot be applied for audio tasks.
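For reference, the pairwise alignment described here typically uses a symmetric InfoNCE loss over a batch of matched pairs; a minimal sketch, assuming L2-normalized embeddings:

```python
import torch
import torch.nn.functional as F

def info_nce(emb_a, emb_b, temperature=0.07):
    # Pairwise similarities between a batch of modality-A and modality-B
    # embeddings; matched pairs (the diagonal) are positives, the rest negatives.
    logits = emb_a @ emb_b.t() / temperature
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```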
Zero-shot
image classification using text prompts.
CLIP [59] popularized a ‘zero-shot’ classification task
based on an aligned (image, text) embedding space. This
involves constructing a list of text descriptions that describe
the classes in a dataset. An input image is classified based | IMAGEBIND- One Embedding Space To Bind Them A |
Table 5: Percentage of helpful and persona-consistent messages from each agent.
Agent | Helpful % | Role Consistent %
LaMDA Everest | 65 | 91
PT Everest | 18 | 85
LaMDA Music | 57 | 89
PT Music | 31 | 84
Table 6: Examples of domain-specific losses for PT responses when compared to LaMDA responses that could be
due to their different performance in foundation metrics (safety, groundedness and quality). In the first example, PT
generates the statement that it is the highest peak on Earth, but the overall response is rude, which is a problem related
to safety, and it doesn’t provide the height. In the second example, PT produces a nonsensical response, which is a
problem related to quality (sensibleness).
Application Context
Everest
How tall are you
PT response
Well, I’m the highest peak on Earth.
Do you think I’m just going to give
you my vitals or something? You
are probably not worthy of such in-
formation... yet. | LaMDA- Language Models for Dialog Applications |
character in the NFKC-normalized string starts with M, S, or P.
Additionally, we put a space between every letter for the languages that do not use spaces to separate words, namely Chinese,
Japanese, Thai, Lao, and Burmese, effectively measuring the character error rate instead.
We note that the above is an imperfect solution, and it will sometimes produce unintended and unexpected outputs. We do
not claim that the text format resulting from the above is more “correct” in any measure. Rather, the procedures above are
designed to better distinguish between innocuous differences in wording and genuine mistranscriptions. Python code for
the standardization procedures above is available as part of our code and model release to facilitate future iterations and
improvements on text standardization.
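As an illustration, here is a minimal sketch of the two steps just described, assuming the category test applies per character and using ISO language codes; the released normalizer differs in detail:

```python
import unicodedata

# Languages written without spaces between words (per the list above).
NO_SPACE_LANGS = {"zh", "ja", "th", "lo", "my"}

def standardize(text: str, lang: str) -> str:
    text = unicodedata.normalize("NFKC", text)
    # Drop characters whose Unicode category starts with M (marks),
    # S (symbols), or P (punctuation).
    text = "".join(c for c in text if unicodedata.category(c)[0] not in "MSP")
    if lang in NO_SPACE_LANGS:
        # Space-separate every character so WER effectively measures CER.
        text = " ".join(text.replace(" ", ""))
    return " ".join(text.split())  # collapse runs of whitespace
```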
D. Raw Performance Tables
D.1. English Transcription
D.1.1. GREEDY DECODING
Model    Sentinel tokens   SuperGLUE (↑)   GEC (↑)
Dense                      84.9 ± 0.33     22.3 ± 0.25
Dense    ✓                 85.1 ± 0.25     22.1 ± 0.42
Sparse                     86.6 ± 0.18     22.2 ± 0.04
Sparse   ✓                 86.6 ± 0.24     22.9 ± 0.09
Table 7: Impact of sentinel tokens for fine-tuning. The addition of sentinel tokens (a concept similar to that used in Lester et al. (2021)) during fine-tuning yields mixed results on the two tasks we consider. SuperGLUE records the average score and GEC records the exact match. While we find that sentinel tokens do not improve generalization, they can accelerate training convergence.
5 DESIGNING SPARSE MODELS | ST-MOE- DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS |
Fast Attention Calculation. In the realm of fast attention, researchers are developing innovative strategies to enhance
efficiency. A primary focus is on attention factorization, which aims to reduce attention calculations that are often unnecessary
in certain contexts. This technique is particularly useful when dealing with lengthy sequential inputs, where direct pairwise
attention computations become computationally intensive. By employing attention factorization, computational demands can
be significantly reduced, transforming 2-D computations into more manageable 1-D formats [9, 51, 179, 313]. Furthermore,
these factorized attention methods are designed to discern and emphasize the attention differences between closely positioned
tokens and their respective changes over time. This nuanced approach ensures that the computational resources are dedicated
to the most impactful elements of the data. Another innovative method involves the use of frequency-based techniques, such as | TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey |
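To make the factorization idea concrete, the toy sketch below replaces full 2-D attention over an H×W grid, which costs O((HW)^2), with two 1-D attentions along rows and columns, which costs O(HW·(H+W)). This is a generic construction in the spirit of the factorized methods cited above; all dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Toy factorized attention: attend along each row, then along each
    column, instead of over all H*W positions at once. `dim` must be
    divisible by `heads`."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                          # x: (B, H, W, dim)
        B, H, W, D = x.shape
        r = x.reshape(B * H, W, D)
        r, _ = self.row_attn(r, r, r)              # 1-D attention along rows
        x = r.reshape(B, H, W, D)
        c = x.transpose(1, 2).reshape(B * W, H, D)
        c, _ = self.col_attn(c, c, c)              # 1-D attention along columns
        return c.reshape(B, W, H, D).transpose(1, 2)

# Example usage: y = AxialAttention(dim=64)(torch.randn(2, 8, 8, 64))
```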
From a mathematical standpoint, let s_agg(k) denote the cumulative use of neuron data across a sequence of k input tokens. Our memory architecture is designed to store an average of s_agg(k) in Dynamic Random-Access Memory (DRAM). As we process each new token, the incremental neuron data, mathematically represented as s_agg(k + 1) − s_agg(k), is loaded from flash memory into DRAM. This practice is grounded in the observed trend of decreasing aggregated neuron usage over time: larger values of k result in a lesser volume of data being loaded for each new token (refer to Figure 4a). This reduction in data loading is counterbalanced by the memory cost associated with storing s_agg(k). In determining the size of the sliding window, the aim is to maximize it within the constraints imposed by the available memory capacity.
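A minimal sketch of this sliding-window policy, under the assumption that per-token active-neuron sets are available, might look as follows:

```python
from collections import deque

class NeuronWindowCache:
    """Sketch (assumed implementation): DRAM holds the union of neurons
    activated by the last k tokens; each new token loads only the increment
    s_agg(k+1) - s_agg(k) from flash."""
    def __init__(self, k):
        self.window = deque(maxlen=k)   # per-token active-neuron sets
        self.in_dram = set()            # union of the window, resident in DRAM

    def step(self, active):
        """`active`: set of neuron ids the new token activates."""
        to_load = active - self.in_dram        # incremental flash -> DRAM read
        self.window.append(set(active))        # oldest token drops out at maxlen
        self.in_dram = set().union(*self.window)
        return to_load
```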
We evaluate all models in JAX on TPU v4-8 with greedy decoding unless specified otherwise. We
normalise text using the Whisper English normaliser (Radford et al., 2022), which standardises text
by removing or converting specific words, symbols, numeric expressions, and managing whitespace
and spellings, in an attempt to only penalise a system when an error is caused by actually mistran-
scribing a word, and not by formatting differences. We measure transcription accuracy using the
WER metric.
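For reference, a minimal implementation of WER as word-level edit distance normalized by reference length (a standard formulation, not the exact evaluation code used here):

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between ref[:i] and hyp[:j].
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            prev_diag, d[j] = d[j], min(
                d[j] + 1,                                # deletion
                d[j - 1] + 1,                            # insertion
                prev_diag + (ref[i - 1] != hyp[j - 1]),  # substitution/match
            )
    return d[-1] / max(len(ref), 1)
```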
During training, we evaluate the intermediate checkpoints every 5k training steps on the 13 vali-
dation sets. We select the checkpoint with the best macro-average performance over the validation
splits for final evaluation on the test splits.
For latency measurements, we evaluate the models in PyTorch (Paszke et al., 2019) using a single
A100 40GB GPU in float16 precision. Specifically, we measure the total time taken to decode 256 | DISTIL-WHISPER |
What are the unintended consequences of potential | Social_Media_and_Democracy |
• Continued to expand AWS's infrastructure footprint to support customers by launching the AWS Israel (Tel Aviv) Region and a new AWS Local Zone in Phoenix, Arizona. The AWS Israel (Tel Aviv) Region is estimated to support an average of 7,700 full-time equivalent jobs annually through a planned investment of $7.2 billion through 2037.
Inventing on behalf of customers
Amazon is driven by a passion for invention across all of its business areas. The company builds new products and services that customers ask for, and also invents new ones that customers didn't know they wanted but that make their lives or businesses better in some meaningful way. For example, Amazon:
2 Related Work
Communicative Agents. Communication between agents has been studied for a long time [44, 45].
There are many ways to facilitate communication between agents, and with agents [19, 53, 57].
Among these, natural language is considered the most natural form of communication [57]. By
enabling agents to function as communicators themselves, they become capable of solving complex
tasks [65, 49, 42, 1]. Communication between AI agents can occur in a competitive setting [67, 62]
or a cooperative setting [26, 18, 8]. Cooperative AI refers to artificial intelligence systems that
are designed to work together with humans and other AI systems to achieve common goals [16].
Cooperative AI systems take into account the needs and capabilities of other agents in the system
ertain size (Wei et al., 2022b). In particular, foundation models with tens or hundreds of billions of parameters can generate intermediate reasoning traces during complex problem-solving, which significantly boosts their zero-shot and few-shot performances (Nakano et al., 2021; Nye et al., 2021; Wei et al., 2022b, inter alia). The reasoning ability that emerges in the foundation models seems to shift the models from System 1 to System 2 (Kahneman, 2011), making it possible to accomplish more complex tasks.

Eliciting Reasoning in Foundation Models. Despite the extensive study of the concept of reasoning in the psychology literature (Wason, 1968; Kelley, 2013), the notion of reasoning as applied to foundation models is not clearly defined. However, in general terms, the reasoning ability in the literature of foundation models can be framed as the capacity to decompose a complex problem into sub-problems and solve them
3.2 INTRINSIC SELF-CORRECTION
Per the discussions in Section 3.1.3, since the idea that LLMs can self-correct their reasoning is not supported by the evidence so far, we turn our focus to the results in the intrinsic self-correction

Footnote 2: For GSM8K, a similar random baseline might not exist, but the underlying rationale remains the same. Additionally, we can design a baseline, for example, by generating a random number each time. After a significant number of rounds, it may reach the correct answer, but such an improvement is apparently not meaningful. A more direct justification is: if we already know the answer, why do we need to do this?
Table 3: Results of GPT-3.5 and GPT-4 on reasoning benchmarks with intrinsic self-correction.

Model     Prompting                  # calls   GSM8K   CommonSenseQA   HotpotQA
GPT-3.5   Standard Prompting
          Self-Correct (round 1)
          Self-Correct (round 2)
GPT-4     Standard Prompting
          Self-Correct (round 1)
          Self-Correct (round 2)
the pose, size, color etc. of those objects. Then, the MLP $\phi_{\mathrm{state}}$ maps $s$ into the language embedding space.

Vision Transformer (ViT). A ViT $\tilde{\phi}_{\mathrm{ViT}}$ (Dosovitskiy et al., 2020) is a transformer architecture mapping an image $I$ into a number of token embeddings $\tilde{x}_{1:m} = \tilde{\phi}_{\mathrm{ViT}}(I) \in \mathbb{R}^{m \times \tilde{k}}$. We consider several variants, including the 4 billion parameter model from Chen et al. (2022), which we refer to as ViT-4B, and a similar 22 billion parameter model, ViT-22B (Dehghani et al., 2023), both of which have been pretrained on image classification. We further investigate the ViT token learner architecture (ViT + TL) (Ryoo et al., 2021), which is trained end-to-end from scratch. Note that the dimensionality $\tilde{k}$ of the ViT embeddings is not necessarily the same as that of the language model. We therefore project each embedding via $x_i = \phi_{\mathrm{ViT}}(I)_i = \psi(\tilde{\phi}_{\mathrm{ViT}}(I)_i)$, with $\psi$ a learned affine transformation.
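In code, the projection step amounts to a single learned linear map; the sketch below uses hypothetical dimensions:

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed shapes) of projecting ViT token embeddings into
# the language model's embedding space with a learned affine map psi.
vit_dim, lm_dim, m = 1024, 4096, 256      # hypothetical dimensions
psi = nn.Linear(vit_dim, lm_dim)          # learned affine transformation

vit_tokens = torch.randn(m, vit_dim)      # stands in for phi_ViT(I), shape (m, vit_dim)
lm_tokens = psi(vit_tokens)               # x_i = psi(phi_ViT(I)_i), shape (m, lm_dim)
```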
Object-centric representations. Unlike language, visual | PaLM-E- An Embodied Multimodal Language Model |
J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. Advances in neural information processing systems, 6, 1993. 7, 10
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan,
P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan,
R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler,
M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever,
and D. Amodei. Language models are few-shot learners, 2020. URL https://arxiv.
org/abs/2005.14165. 3, 39
M. Caron, P. Bojanowski, A. Joulin, and M. Douze. Deep clustering for unsupervised
learning of visual features. In Proceedings of the European conference on computer vision
(ECCV), pages 132–149, 2018. 6, 12, 44 | A Cookbook of Self-Supervised Learning |
Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198, 2018.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke
Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint
arXiv:1907.11692, 2019.
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Unicorn on rainbow: A universal
commonsense reasoning model on a new multitask benchmark. arXiv preprint arXiv:2103.13009, 2021.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for
Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011.
Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. | UL2- Unifying Language Learning Paradigms |
Property DLBS is the only one of the methods in Section 6 that is not transitive. The following example illustrates why.

Example 67. Let M^1 be the landmarks for τ_1 and M^2 the landmarks for τ_2. Let v be a variable in V^1 and let φ_1 = {(v = 0)} ∈ M^1 be a landmark on V^1. Then V^2 contains both v and a variable v_{φ_1} for the landmark (the actual name of the variable is not important), and every action in A^2 that contains (v = 0) in its postcondition will also contain (v_{φ_1} = 1). Since M^2 is a set of landmarks on V^2, it is allowed to contain the landmark φ_2 = {(v_{φ_1} = 1)}, which is a landmark for the landmark φ_1. Then V^3 contains v, v_{φ_1} and v_{φ_2}, and every action in A^3 with (v = 0) in its postcondition will also contain (v_{φ_1} = 1) and (v_{φ_2} = 1) in its postcondition. Now consider the composite transformation τ_3 = τ_1 ∘ τ_2, which is a direct transformation from F^1 to F^3. Let M^3 be the landmark set for τ_3. By definition, M^3 can only contain landmarks on V^1, so it can contain
[202] Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. Codet5: Identifier-aware unified pre-trained encoder-
decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859 (2021).
[203] Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, and Shikun Zhang.
2023. Exploring Vision-Language Models for Imbalanced Learning. arXiv preprint arXiv:2304.01457 (2023).
[204] Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong
Wang, Xing Xie, et al. 2023. PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization.
arXiv preprint arXiv:2306.05087 (2023).
[205] Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei Zhang, Liling Dong, Jing
Gao, et al. 2023. Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today.
arXiv preprint arXiv:2306.01499 (2023). | ASurveyonEvaluationofLargeLanguageModels |
while successfully handling a wide range of diverse tasks.
We follow OFA (Wang et al., 2022b) to design BiomedGPT, which takes BART (Lewis et al., 2019) as the
backbone that is implemented as a sequence-to-sequence model with a BERT-style encoder over corrupted
text and a GPT-style left-to-right autoregressive decoder. We make a few architectural changes to adapt
the BART architecture for BiomedGPT. First, to improve the convergence efficiency and stability in the
pretraining, we add three normalization operations to each layer: a post-attention Layer Norm (LN) (Ba
et al., 2016), post-first-FFN LN, and head-wise scaling within self-attention, following (Shleifer et al., 2021).
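The sketch below illustrates, with assumed details, how the post-attention LayerNorm and head-wise scaling could be wired into a self-attention block; it is not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StabilizedSelfAttention(nn.Module):
    """Illustrative self-attention with a learned per-head gain and a
    post-attention LayerNorm, in the spirit of the changes described above."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.head_scale = nn.Parameter(torch.ones(n_heads))  # head-wise scaling
        self.post_attn_ln = nn.LayerNorm(d_model)            # post-attention LN

    def forward(self, x):                                    # x: (B, T, d_model)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        h = F.scaled_dot_product_attention(q, k, v)          # (B, heads, T, d_head)
        h = h * self.head_scale.view(1, -1, 1, 1)            # scale each head's output
        h = h.transpose(1, 2).reshape(B, T, -1)
        return self.post_attn_ln(x + self.out(h))            # residual, then LN
```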
To encode positional information, we incorporate two sets of absolute position embeddings for both text and
images. Rather than merely combining these embeddings with token and patch embeddings, we implement
a decoupling method to separate position correlation (Kitaev & Klein, 2018; Ke et al., 2019). Furthermore, | BiomedGPT |
MQA, we increase the dimension of the feed-forward layers to compensate for the reduction in the attention
layers. For the MQA variant, we increase the FFN dimension by a factor of 1.33, and for the GQA variant, we
increase it by a factor of 1.3. From the results, we observe that the GQA variant performs comparably to the
MHA baseline on most evaluation tasks and is better than the MQA variant on average.
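The following sketch (hypothetical shapes) shows the core of grouped-query attention: queries keep all heads while keys and values use fewer heads that are shared within groups; MQA is the special case of a single KV head:

```python
import torch

# GQA with n_kv_heads < n_heads; MHA is n_kv_heads = n_heads, MQA is 1.
n_heads, n_kv_heads, d_head, B, T = 32, 8, 128, 2, 16    # hypothetical sizes
q = torch.randn(B, n_heads, T, d_head)
k = torch.randn(B, n_kv_heads, T, d_head)
v = torch.randn(B, n_kv_heads, T, d_head)

group = n_heads // n_kv_heads
k = k.repeat_interleave(group, dim=1)    # broadcast each KV head to its group
v = v.repeat_interleave(group, dim=1)
attn = torch.softmax(q @ k.transpose(-2, -1) / d_head**0.5, dim=-1) @ v
```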
To optimize for latency, we host our largest models using 8 A100s in a single node with tensor parallelism
(Shoeybi et al., 2019). In this setting, sharding for MQA cannot be done across heads anymore, given the
number of heads is lower than the number of GPUs. Either you duplicate the KV values in all GPUs (making
the KV cache size equal to GQA), or an alternative is to shard across the batch dimension instead (Pope et al.,
2022). The latter, however, can complicate an inference service, as it works only when batch sizes are larger | Llama2 |
Data               AnsAug   Rephrasing   SV   FOBAR   Overall
MetaMathQA-GSM8K   80K
MetaMathQA-MATH    75K
MetaMathQA         155K

Table 2: Number of samples in the proposed MetaMathQA.
interestingness, safety, and groundedness. An advantage of using several different metrics is their debuggability: by
exploring responses with low safety or groundedness scores, we have been able to develop targeted methods to improve
them. | LaMDA- Language Models for Dialog Applications |
instruction fine-tuned models; and 3) personality in LLM outputs can be shaped
along desired dimensions to mimic specific personality profiles. We also discuss
potential applications and ethical implications of our measurement and shaping
framework, especially regarding responsible use of LLMs. | PersonalityTraitsinLargeLanguageModels |
jobs

Dolly 2.0 generates content for a tweet

Instruction: Write me a tweet about the launch of Dolly 2.0, our new LLM.

We've upgraded our LLM, making it more ef
2 Background
In this chapter, we introduce the definition of RAG and compare RAG with other model optimization techniques, such as fine-tuning.

2.1 Definition
The meaning of RAG has expanded in tandem with technological developments. In the era of Large Language Models, RAG specifically refers to a model that, when answering questions or generating text, first retrieves relevant information from a vast corpus of documents and then uses the retrieved information to generate its response, thereby improving the quality of its predictions. The RAG approach allows developers to avoid retraining an entire large model for each specific task; instead, they can attach a knowledge base that supplies additional information to the model and improves the accuracy of its responses. RAG methods are particularly well-suited to knowledge-intensive tasks. In summary, a RAG system consists of two key stages, sketched below:
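The sketch illustrates the retrieve-then-generate pattern; `embed`, `index.search`, and `generate` are hypothetical stand-ins for a real embedding model, vector index, and LLM call:

```python
def rag_answer(question, index, embed, generate, top_k=5):
    # Stage 1: retrieval -- find the most relevant passages in the corpus.
    docs = index.search(embed(question), top_k)
    # Stage 2: generation -- condition the LLM on the retrieved context.
    context = "\n\n".join(d.text for d in docs)
    prompt = (f"Answer using the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return generate(prompt)
```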
5.2.2 Code Translation
For generating initial Python translation, we apply the same few-shot prompt for TransCoder as [13],
which consists of 3 exemplars (Appendix B.1). From Figure 7a, we again observe that the major
improvement comes from the first debugging turn. Specifically, a single debugging turn with the
full feedback improves over the greedy decoding accuracy by around 12%. Compared to Figure 7b,
applying SELF-DEBUGGING to greedy decoding outperforms the baseline accuracy with 5 samples,
and is close to the baseline accuracy with 10 samples.
Meanwhile, incorporating both unit test execution and code explanation improves the debugging
performance, and we present some examples in Figures 9 and 10. In addition, we demonstrate that
leveraging code explanation alone without SELF-DEBUGGING also provides a consistent performance
gain of 2–3% for different numbers of samples, as shown in Figure 7b.
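For intuition, a SELF-DEBUGGING turn with unit-test feedback can be sketched as follows; `llm` and the test objects are hypothetical stand-ins, not the paper's actual interface:

```python
def self_debug(llm, code, tests, max_turns=3):
    # Run the unit tests, feed any failures back to the model, and ask it
    # to revise its own code; stop early once all tests pass.
    for _ in range(max_turns):
        failures = [t for t in tests if not t.passes(code)]
        if not failures:
            return code                     # all unit tests pass
        feedback = "\n".join(t.error_message(code) for t in failures)
        code = llm(f"Code:\n{code}\n\nFailed tests:\n{feedback}\n\nFix the code.")
    return code
```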
In Figure 3 and Table 2 we see that NF4 improves performance significantly over FP4 and Int4 and that double quantization reduces the memory footprint without degrading performance.
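As a rough illustration, double quantization can be sketched as quantizing the per-block absmax constants themselves; the toy int8 version below assumes block sizes for illustration, while the actual scheme differs in details such as the NF4 data type:

```python
import torch

def double_quantize(w, block=64, const_block=256):
    # Toy sketch: quantize weights blockwise with one fp32 absmax constant
    # per block, then quantize those constants to 8 bits to save memory.
    # Assumes w.numel() is divisible by block * const_block.
    blocks = w.reshape(-1, block)
    absmax = blocks.abs().amax(dim=1).clamp(min=1e-8)   # fp32 constant per block
    q = torch.round(blocks / absmax[:, None] * 127).to(torch.int8)
    groups = absmax.reshape(-1, const_block)            # second quantization level
    c2 = groups.amax(dim=1)                             # few fp32 constants remain
    q_absmax = torch.round(groups / c2[:, None] * 255).to(torch.uint8)
    return q, q_absmax, c2
```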
k-bit QLORA matches 16-bit full finetuning and 16-bit LoRA performance

Recent findings have established that 4-bit quantization for inference is
• Navigation. Navigation permits agents to dynamically alter their positions within the environ-
ment, which often involves multi-angle and multi-object observations, as well as long-horizon
manipulations based on current exploration [23]. Before navigation, it is essential for embodied
agents to establish prior internal maps about the external environment, which are typically in the
form of a topological map, semantic map or occupancy map [358]. For example, LM-Nav [335]
utilizes the VNM [379] to create an internal topological map. It further leverages the LLM and
VLM for decomposing input commands and analyzing the environment to find the optimal path.
Furthermore, some [380; 381] highlight the importance of spatial representation to achieve the
precise localization of spatial targets rather than conventional point or object-centric navigation
actions by leveraging the pre-trained VLM model to combine visual features from images with 3D | TheRiseandPotentialofLargeLanguageModel BasedAgents |
3.3 Survey #1
In the next stage of our scale development process, we designed a Qualtrics-based online survey to collect data from participants and conducted an exploratory factor analysis and item reduction. Boateng et al. [10], referring to Comrey [17], recommend a sample size of at least 200 participants for studies of this kind; we exceeded this recommendation with a sample of n = 302 participants.

3.3.1 Participants. The sample was composed of 149 female and 153 male participants with a mean age of 44.4 years (SD = 13.0). No participants chose not to reveal their identity, and no participant self-identified as
non-binary or other. Participants were recruited through the UK-based platform Prolific, with the sample being
drawn from the United Kingdom and the United States. All participants reported English as being their mother
tongue. Participation was voluntary and compensated by 9 GBP per hour. The participants were informed that | Society’sAttitudesTowardsHumanAugmentation |
In this way, the latent space for music can serve
as the starting point for our text-to-music genera-
tor, which will be introduced next. To ensure this
representation space fits the next stage, we apply a
tanh function on the bottleneck, keeping the val-
ues in the range [−1, 1]. Note that we do not use
a more disentangled bottleneck, such as the one
in VAEs (Kingma and Welling, 2014), as its addi-
tional regularization reduces the amount of allowed
compressibility.
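A minimal sketch of such a tanh-squashed bottleneck follows (assumed layer shapes, for illustration only):

```python
import torch
import torch.nn as nn

class TanhBottleneckAE(nn.Module):
    # Illustrative autoencoder whose bottleneck is squashed by tanh so the
    # latent values stay in [-1, 1] for the diffusion stage; unlike a VAE,
    # no extra KL regularization constrains the latent.
    def __init__(self, dim=512, latent_dim=32):
        super().__init__()
        self.encoder = nn.Linear(dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, dim)

    def encode(self, x):
        return torch.tanh(self.encoder(x))   # bounded latent in [-1, 1]

    def forward(self, x):
        return self.decoder(self.encode(x))
```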
3.2 Stage 2: Text-to-Music Generation by
Text-Conditioned Latent Diffusion
(TCLD)
Based on the learned music representation space,
in this stage, we guide the music generation with
text descriptions.
Overview As shown in Figure 4, we propose a
text-conditioned latent diffusion (TCLD) process.
Specifically, we first corrupt the latent space of
music with a random amount of noise, then train a
series of U-Nets to remove the noise, and condition the denoising process on the text embedding.

[Figure 4: overview of text-conditioned latent diffusion; a U-Net denoises a noised latent, conditioned on a text embedding produced by a transformer.]

4 Experimental Setup
and 15 hours a week as a coach, then she works 50 x 35 = 1750 hours as a teacher and 15 x 30 = 450 hours as a
coach. So she works 1750 + 450 = 2200 hours. She gets paid 20 dollars per hour for 1750 hours and 30 dollars
per hour for 450 hours. So her annual salary is 20 x 1750 + 30 x 450 = $36,500. The answer is $36,500. ✗
EXPLANATION FOR ERROR CATEGORY: The model incorrectly mapped “15 x 30” instead of “15 x 50”, and
the chain of thought would be made correct by fixing just that one change (with an external calculator).
QUESTION: Kelian has two recipes for preparing dishes, one having 20 instructions and the second one having
twice as many instructions as the first one. How many instructions does Kelian have to read to prepare the two
dishes?
MODEL ANSWER (INCORRECT; ONE STEP MISSING ERROR): Kelian has two recipes. One has 20 instruc-
tions. The other has twice as many instructions as the first one. So the second one has 40 instructions. So Kelian | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models |
[Figure: Chamfer distance (cm) versus dataset scale (1/8x to 8x) for PIFu, PaMIR, and ICON.]
(a) ICON reconstructions for in-the-wild images with extreme poses (Sec. 5.1).
(b) Avatar creation from images with SCANimate (Sec. 5.2). The input per-frame meshes are reconstructed with ICON.
Figure 8. ICON results for two applications (Sec. 5). We show two views for each mesh, i.e., a front (blue) and a rotated (bronze) view. | ICON |
popular journalistic narrative, that online hate speech did not increase either over the course of Donald Trump's 2016 campaign or in the aftermath of his unexpected election. Using a dataset of more than 1 billion tweets, their results are robust to whether hate speech is identified using a machine-learning–augmented dictionary-based approach or a community-based
1 INTRODUCTION
Figure 11: Impact of different projector architectures and output dimension on popular methods. x−y−z denotes an MLP with layers of output dimension x, y and z respectively. From Garrido et al. [2022b].
Influence of the backbone’s output dimension. Recent works also investigated the
effect of the backbone dimension. Dubois et al. [2022] observed that larger backbone
In this work, we focus particularly on the role of knowledge graphs in the context of Explainable Machine Learning. Knowledge Representation has a long tradition in manipulating, creating, standardising, and publishing structured knowledge. In the last two decades, efforts have focused on scaling up techniques to deal with the pervasive nature of the Web [9]. Semantic technologies make it easy to access knowledge sources on the Web, while symbolic representations in the form of ontologies, knowledge bases and graph databases make it possible to formalise and capture knowledge and data about specific domains as well as general, encyclopaedic knowledge. Such formal knowledge, which we will refer to as knowledge graphs, is machine readable, mostly publicly accessible and, more importantly, linked across domains – allowing machines