7% and 21% of model training has a majority of pronouns modified such that their grammatical gender is feminine rather than masculine. We demonstrate that such interventions are successful at reducing bias measures on a targeted benchmark, and propose these counterfactual interventions and retrainability of portions of our models as a key tool for future study of the influence of training corpora on model behavior.
Pythia- A Suite for Analyzing Large Language Models Across Training and Scaling
Artificial neural models have, for some time, exhibited the ability to achieve significant success on specific tasks when trained on those tasks (Devlin et al., 2019; Liu et al., 2019). PLMs in particular have demonstrated this even in the few-shot setting (Hofer et al., 2018; Radford et al., 2019; Brown et al., 2020; Gao et al., 2020). Such performance after training on a task is not considered “emergent”, precisely because models are trained on that very task. Indeed, the fact that LLMs are not trained on the tasks used in evaluating their emergent abilities is central to identifying abilities which are truly emergent. The assertion that achieving satisfactory performance on a given task signifies the emergence of associated ‘abilities’ hinges on the condition that models are not explicitly trained for that specific task. Alternatively, it would suggest that a model possesses the expressive power required to undergo training for the said task. Only
AreEmergentAbilitiesinLarge Language Models just In-Context
6 Conclusion and Future Work
This paper introduces the M2UGen model, which utilizes a large language model (LLM) to achieve music understanding and multi-modal music generation within a unified framework. Furthermore, we present a comprehensive methodology for generating the datasets used to train our model. The experiments show that our proposed M2UGen model outperforms or achieves SOTA performance in various tasks, including music understanding, music editing, and text/image/video-to-music generation. Our future work will focus on further enhancing the model’s fine-grained music understanding capabilities, as well as improving the correlation between generated music and input instructions.
M2UGen
Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, and Tat-Seng Chua. Next-gpt: Any-to-any multimodal LLM. CoRR, abs/2309.05519, 2023b.
Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. Soundstream: An end-to-end neural audio codec. IEEE ACM Trans. Audio Speech Lang. Process., 2022.
Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. CoRR, abs/2305.11000, 2023a.
Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu. Speechtokenizer: Unified speech tokenizer for speech large language models. CoRR, abs/2308.16692, 2023b.
Qwen-Audio
where the tags <Image>, <Response>, <Condition> and <Content X> serve as placeholders for inserting the embeddings of the visual image, the text response, the text condition, and the text option content. The condition of the instruction with condition is from the nouns of the ground-truth response, and the candidate options of the instruction for ranking are from the Oogiri data with multiple answers. Besides, we illustrate the instructions for selection taking 3T1 selection as an example. For other types of selection instructions, only minor modifications to the number of options and quantifiers are needed.
Let’sThinkOutsidetheBox
We also conduct ablation experiments on the multimodality memory and retrieval methods. We set JARVIS-1 w/o the memory module as the baseline agent. We first evaluate JARVIS-1's performance with different memory sizes (representing different learning stages), as shown in Figure 6, which demonstrates the effectiveness of self-improving within JARVIS-1. We further conduct experiments on a subset of Minecraft tasks using three different retrieval methods: retrieval with instruction embedding only (T), reasoning + retrieval with text embedding (T+R), and reasoning + retrieval with multimodal embedding (M+R). Except for the memory and retrieval methods, all other components are kept the same. The results are listed in Table 4. The experiments show that reasoning before retrieval can effectively improve retrieval accuracy, and that retrieval based on the multimodal state, including vision observation and symbolic information (e.g., inventory, location, etc.), is better than considering the text embedding alone.
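As a rough illustration of the retrieval variants compared in this ablation, the sketch below ranks stored memory entries by cosine similarity and optionally fuses a state embedding with the instruction embedding to mimic the multimodal (M+R) variant. The function names and the fusion-by-concatenation choice are illustrative assumptions, not JARVIS-1's actual implementation.

```python
import numpy as np

def retrieve(memory_keys, query_text_emb, query_state_emb=None, top_k=3):
    """Return indices of the top_k most similar memory entries.

    memory_keys: (N, D) array of stored key embeddings; for the M+R variant
    each key is assumed to concatenate text and state (vision/symbolic)
    features, matching the fused query below.
    """
    query = query_text_emb
    if query_state_emb is not None:
        # Multimodal variant: fuse instruction and state embeddings.
        query = np.concatenate([query_text_emb, query_state_emb])
    query = query / np.linalg.norm(query)
    keys = memory_keys / np.linalg.norm(memory_keys, axis=1, keepdims=True)
    scores = keys @ query           # cosine similarity against every entry
    return np.argsort(-scores)[:top_k]
```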
JARVIS-1
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, 2019. doi: 10.1162/tacl_a_00276. URL https://aclanthology.org/Q19-1026.
Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4034–4048, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.360. URL https://www.aclweb.org/anthology/2020.findings-emnlp.360.
Leblond et al. AlphaCode 2 Technical Report. 2023. URL https://storage.googleapis.com/
gemini_1_report
Parameter-Efficient Transfer Learning for NLP
Neil Houlsby, Andrei Giurgiu*, Stanisław Jastrzębski*, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly
arXiv:1902.00751v2 [cs.LG] 13 Jun 2019
Abstract
Parameter-Efficient Transfer Learning for NLP
5 Appendix A.2 describes example stability challenges, such as FP16 mixed precision training causing numerical underflows.

Figure 5: Percentage loss increase relative to the Cerebras-GPT scaling law, plotted against training FLOPs.

Second, models show significantly more predictable scaling. Figure 5 plots the percentage loss increase for each SP and µP model relative to the SP scaling law (negative values indicate improved loss). µP models show an average of 0.43% better Pile test loss compared to the Cerebras-GPT SP scaling law fit. Further, µP models show substantially lower variance, with just 0.04% standard deviation relative to the SP scaling law, while SP models show a standard deviation of 0.66% (∼16× noisier). For perspective, the run-to-run standard deviation in loss when using different initialization and data random seeds is around 0.35%.
Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster
4.2 Melody evaluation
Simple and Controllable Music Generation
# Show the plot
plt.show()

This code should create a figure with two subplots, each showing a scatterplot of "HE pass@1" vs "MBPP pass@1" for a different subset of the data. The first subplot uses the data where "decoding" is equal to 0.1, while the second subplot uses the data where "decoding" is equal to "greedy". The colors, styles, and sizes of the points are determined by the "Fine-tuning", "Capabilities", and "Model size" columns, respectively. The legend for the second subplot is placed outside of the plot, on the right side, and the grid is added to each subplot using dotted grey lines. The axes are shared between the two subplots, and the layout is adjusted to fit the figure.

# Share the axes
ax1.sharex(ax2)
ax1.sharey(ax2)

# Adjust the layout
fig.tight_layout()

Generated figure:
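For readers who want to reproduce the described layout, here is a minimal self-contained sketch with a synthetic stand-in dataframe; the column values and model sizes are invented for illustration, and the per-column color/style encodings mentioned above are omitted for brevity.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Synthetic stand-in for the evaluation table described in the text.
df = pd.DataFrame({
    "HE pass@1":   [0.30, 0.35, 0.45, 0.28, 0.33, 0.48],
    "MBPP pass@1": [0.40, 0.44, 0.55, 0.38, 0.43, 0.57],
    "decoding":    ["0.1", "0.1", "0.1", "greedy", "greedy", "greedy"],
    "Model size":  [7, 13, 34, 7, 13, 34],   # billions of parameters
})

fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True)
for ax, dec in ((ax1, "0.1"), (ax2, "greedy")):
    sub = df[df["decoding"] == dec]
    ax.scatter(sub["HE pass@1"], sub["MBPP pass@1"], s=sub["Model size"] * 10)
    ax.set_title(f"decoding = {dec}")
    ax.set_xlabel("HE pass@1")
    ax.grid(True, linestyle=":", color="grey")   # dotted grey grid lines
ax1.set_ylabel("MBPP pass@1")
fig.tight_layout()
plt.show()
```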
CodeLlama2
(:init (on b5 b3) (on b4 b2) (on b2 b1) (on b3 b4) (clear b5) (empty))
(:goal (and (on b1 b2) (on b3 b5) (on b4 b1)))

3.2 In-Context Learning
LLMs are known to be capable of in-context learning without finetuning their parameters. By in-context learning, we mean LLMs' ability to perform unseen downstream tasks by simply conditioning on a few input-label pairs (demonstrations) [10]. Here is a pair of problem descriptions in natural language and in PDDL provided to GPT-4 as context. When the context is included with the prompt from the example above, the resulting PDDL problem file is directly solvable by the planner.

An Example PDDL Problem File Written by GPT-4 with Context
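The in-context setup described here amounts to prepending one solved (natural language, PDDL) pair before the new task. Below is a minimal sketch of how such a prompt could be assembled; the function name and template wording are assumptions, not the paper's exact prompt.

```python
def build_context_prompt(example_nl, example_pddl, task_nl):
    """One-shot prompt: a solved NL->PDDL demonstration, then the new task.
    The LLM's output (a PDDL problem file) is handed to a classical planner."""
    return (
        "An example planning problem is:\n"
        f"{example_nl}\n"
        "The PDDL problem file for this problem is:\n"
        f"{example_pddl}\n"
        "Now I have a new planning problem:\n"
        f"{task_nl}\n"
        "Provide the PDDL problem file that describes the new planning "
        "problem directly, without further explanation."
    )
```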
LLM+P- Empowering Large Language Models with Optimal Planning Proficiency
ety of criteria compared with existing music generation models. Lastly, to promote the open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field.1
Moûsai
Previous research suggests that not all corrections are effective in reducing individuals' reliance on misinformation. There are two pathways through which misinformation might continue to shape attitudes and behaviors post-correction: the continued influence effect and backfire effects. Engrained in the former is the notion that corrections are somewhat, but not entirely, effective at dispelling misinformation. More concerning, however, are the latter, in which corrections not only fail to reduce but actually strengthen beliefs in the original misinformation. Neither of these phenomena offers a particularly sanguine take on the ability to curtail the spread of misinformation. However, each offers its own unique predictions about the most promising avenues for corrections. We begin by reviewing the extant literature on backfire effects and then turn to the continued influence effect.

Backfire Effects
Corrections of misinformation may, under
Social_Media_and_Democracy
Our method admits three potential sources of error, quantified by the following residuals:

$$\epsilon_1 := \epsilon_1(\ell, b) := p(\theta_b^\ell) - q(\theta_b^\ell) \tag{3}$$

$$\epsilon_2 := \epsilon_2(\ell, b, x) := \prod_{j=1}^{d} p(x_j \mid \theta_b^\ell) - \prod_{j=1}^{d} q(x_j; \psi_{b,j}^\ell) \tag{4}$$

$$\epsilon_3 := \epsilon_3(\ell, b, x) := p(x \mid \theta_b^\ell) - \prod_{j=1}^{d} p(x_j \mid \theta_b^\ell) \tag{5}$$

We refer to these as errors of coverage, density, and convergence, respectively. Observe that $\epsilon_1$ is a random variable that depends on $\ell$ and $b$, while $\epsilon_2, \epsilon_3$ are random variables depending on $\ell$, $b$ and $x$. We suppress the dependencies for ease of notation.

Lemma 1. The error of our estimator satisfies the following bound:

$$\mathrm{MISE}(p, q) \le 2B^{-2}\, \mathbb{E}\!\int \big(\alpha^2 + \beta^2\big)\, dx,$$

where

$$\alpha := \sum_{\ell, b \,:\, x \in \mathcal{X}_b^\ell} p(\theta_b^\ell)\, \epsilon_3 \qquad \text{and} \qquad \beta := \sum_{\ell, b \,:\, x \in \mathcal{X}_b^\ell} \Big( p(\theta_b^\ell)\, \epsilon_2 + \epsilon_1 \prod_{j=1}^{d} p(x_j \mid \theta_b^\ell) - \epsilon_1 \epsilon_2 \Big)$$
Adversarial Random Forests for Density Estimation and Generative Modeling
Foundation of Generalization: Interface Unification. To facilitate knowledge transfer among tools, it is critical to design a unified interface that enables the model to manipulate various tools in a consistent and standardized manner, which serves as the foundation for generalizable tool learning. Through a unified interface, models can identify and abstract essential features of tools more easily under a single tool protocol, rather than grappling with the difficulty of understanding varied tool interfaces. Currently, tools are manipulated by predicting discrete action tokens, and the action space is not aligned across scenarios, which prevents models from quickly adapting to new scenarios and tools. Inspired by the way we categorize tools in § 2.2, we identify three potential forms of interface unification: the semantic interface, the GUI interface, and the programming interface; a minimal sketch of such a protocol follows.
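As a concrete (if simplified) picture of what a unified programming interface could look like, the sketch below defines one abstract tool protocol that every tool implements, so a model's action space reduces to choosing a tool name and arguments. The class names and registry are illustrative assumptions, not a specification from the survey.

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Unified tool protocol: every tool exposes the same call signature,
    so the model emits (tool_name, arguments) pairs in one action space
    instead of learning a bespoke interface per tool."""

    name: str
    description: str  # natural-language spec shown to the model

    @abstractmethod
    def call(self, **kwargs) -> str:
        """Execute the tool and return an observation string."""

class Calculator(Tool):
    name = "calculator"
    description = "Evaluate an arithmetic expression."

    def call(self, expression: str) -> str:
        # Toy evaluator with builtins disabled; real tools would sandbox this.
        return str(eval(expression, {"__builtins__": {}}))

registry = {t.name: t for t in [Calculator()]}
print(registry["calculator"].call(expression="3 * 7"))  # -> 21
```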
Tool Learning with Foundation Models
3 Shortcomings of Free-Text Pipelines
We first analyze “faithful-by-construction” pipeline models (I→R; R→O) for free-text rationalization with respect to two properties: quality of generated rationales (§3.1) and appropriateness of the sufficiency assumption (§3.2).

3.1 Joint Model Rationales are More Indicative of Labels
Rationales should be a function of the input and the predicted label. To demonstrate why this is the case, consider training an I→R model on a dataset with multiple annotation layers, e.g., OntoNotes, that contains word sense, predicate structure, and coreference (Pradhan et al., 2007). Without additional task-specific input, this model would produce the

4 The predicted label is from the same system that produced the predicted rationale.
Measuring Association Between Labels and Free-Text Rationales
United States v. Alvarez, 276
Urban study of content notice and takedown, 226–227
user characteristics, in hate speech detection, 60, 63
user level liability, pros and cons to applying to platforms, 272–273
“us vs. them,” to identify othering in hate speech, 60
vaccine debate, bot manipulation of, 100
Van Alstyne, Marshall, 36
van der Linden, S., 180
van Dijck, José, 147
VanDuyn, E., 25
Vargo, C. J., 23
Viacom v. YouTube, 232
violence: hate speech and, 56, 57, 58, 67, 69, 70–71; online speech connection to, 241; polarization on social media and, 49
virtual bystander effect, corrections of misinformation on social media, 185
Volokh, Eugene, 237
Vosoughi, S., 22
Walter, N., 184
Warner, Mark, 199
weak ties: sharing of news by, 35; as sources of counter-attitudinal information, 41
web crawlers, limitations in gathering advertising data, 130, see also bots
Social_Media_and_Democracy
misinformation detection w.r.t. partisan leanings, and how it is propagated to language models and even further to downstream tasks.
DataManagementForLargeLanguageModels-ASurvey
Figure 44: Crowdworker demographics for General Workers (n=115) and Select Workers (n=28), covering education level (high school or some college; college degree; graduate or professional degree; prefer not to say; other) and disability status (hearing, vision, cognitive, ambulatory, or self-care difficulty; none). [Flattened count/percentage columns not recoverable.]

MMLU (Multiple choice)
This eval has 4 choices per question, but we show two examples here.
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
embedding benchmark. ArXiv, abs/2210.07316, 2022.
[41] Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas A. Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David P. Schnurr, Felipe Petroski Such, Kenny Sai-Kin Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. Text and code embeddings by contrastive pre-training. ArXiv, abs/2201.10005, 2022.
[42] Duc Tam Nguyen, Chaithanya Kumar Mummadi, Thi-Phuong-Nhung Ngo, Thi Hoai Phuong Nguyen, Laura Beggel, and Thomas Brox. SELF: learning to filter noisy labels with self-ensembling. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=HkgsPhNYPS.
E5
Experiments. Hyung Won Chung, Le Hou, Shayne Longpre, Jason Wei, Yi Tay, Barret Zoph, Xuezhi Wang, William Fedus, Yunxuan Li, Siddhartha Brahma, Adams Yu, Xinyun Chen, Shixiang Shane Gu, Sharan Narang, Albert Webson, Adam Roberts.
Training infrastructure. Le Hou, Hyung Won Chung, Shayne Longpre, Jason Wei, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Adam Roberts, Yanping Huang, Gaurav Mishra.
Evaluation design. Jason Wei, Hyung Won Chung, Shayne Longpre, Le Hou, Barret Zoph, Yunxuan Li, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Slav Petrov, Jacob Devlin, Adam Roberts.
Building datasets. Le Hou, Shayne Longpre, Jason Wei, Hyung Won Chung, Mirac Suzgun, Barret Zoph, William Fedus, Yi Tay, Vincent Zhao, Zhuyun Dai, Adam Roberts.
Framing and paper writing. Jason Wei, Hyung Won Chung, Quoc V. Le, Le Hou, Shayne Longpre, Yi Tay, Jacob Devlin, Jeff Dean, Denny Zhou, Aakanksha Chowdhery, Andrew Dai, Slav Petrov.
Scaling Instruction-Finetuned Language Models
Specifically, we found that information generated by the model is most likely to be useful for individuals and non-state actors who do not have access to formal scientific training. The model can provide general information on common proliferation pathways, including historical attempts at proliferation that were successful. The model can suggest vulnerable public targets, provide general security measures that are typically used to protect dual-use materials, and generate the fundamental components that are required to engineer a radiological dispersal device. The model readily re-engineered some biochemical compounds that were publicly available online, including compounds that could cause harm at both the individual and population level. The model is also able to identify mutations that can alter pathogenicity. Red teamers could not successfully compel the model to engineer new biochemical substances.
gpt-4-system-card
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems Datasets and Benchmarks, 2022. 1, 2, 3, 8, 12, 16, 18, 19
William H Guss, Brandon Houghton, Nicholay Topin, Phillip Wang, Cayden Codel, Manuela Veloso, and Ruslan Salakhutdinov. Minerl: A large-scale dataset of minecraft demonstrations. arXiv preprint arXiv:1907.13440, 2019a. 1, 18
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023b. 4, 5, 7, 8, 9, 12
JARVIS-1
Table 3: Descriptions and examples from one task not found to be emergent (Tracking Shuffled Objects), one task previously found to be emergent (Logical Deductions), and one task found to be emergent only in GPT-4 (GSM8K). A similar list of all 22 of the tasks that we use in our experiments is presented in Appendix A, Table 8.

to experiment with it. Furthermore, we introduce an additional adversarial prompt in the closed format, where the correct answer choice is indicated by a corresponding letter rather than the text itself. This adversarial style of prompting provides us with two advantages: a) it requires the model to rely on its instructional understanding, as it needs to associate the letter with the appropriate choice, and b) it allows models to output answers with a single letter, simplifying the process of generating an answer, allowing for the possibility that they possess the knowledge to

Prompt formats: default (closed); completion (closed); adversarial (closed).
AreEmergentAbilitiesinLarge Language Models just In-Context
that VOYAGER is able to utilize the learned skill library in a new Minecraft world to solve novel tasks from scratch, while other methods struggle to generalize.
VOYAGER- An Open-Ended Embodied Agent with Large Language Models
[433] Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Interspeech. 2751–2755.
[434] Rohit Prabhavalkar, Kanishka Rao, Tara N Sainath, Bo Li, Leif Johnson, and Navdeep Jaitly. 2017. A Comparison of sequence-to-sequence models for speech recognition. In Interspeech. 939–943.
[435] Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. Waveglow: A flow-based generative network for speech synthesis. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 3617–3621.
AReviewofDeepLearningTechniquesforSpeechProcessing
Table 7: Linear evaluation on other image and video classification. The image benchmarks contain a large quantity of fine-grained examples about objects or scenes. The video benchmarks cover action classification and human-object interaction. All the features are frozen with a linear probe on top.

| Feature  | Arch     | iNat2018 | iNat2021 | Places205 | K400 | UCF-101 | SSv2 |
|----------|----------|----------|----------|-----------|------|---------|------|
| OpenCLIP | ViT-G/14 | 73.0     | 76.0     | 69.8      | 78.3 | 90.7    | 35.8 |
| MAE      | ViT-H/14 | 31.0     | 32.3     | 52.4      | 54.2 | 70.6    | 29.2 |
| DINO     | ViT-B/8  | 59.6     | 68.3     | 60.4      | 64.5 | 85.0    | 32.6 |
| iBOT     | ViT-L/16 | 66.3     | 74.6     | 64.4      | 72.6 | 88.6    | 38.7 |
| DINOv2   | ViT-S/14 | 69       | 74.2     | 62.9      | 67.8 | 87      | 33.1 |
| DINOv2   | ViT-B/14 | 76.4     | 81.1     | 66.2      | 73.2 | 89.1    | 34.4 |
| DINOv2   | ViT-L/14 | 80.4     | 85.1     | 67.3      | 76.3 | 90.5    | 35.6 |
| DINOv2   | ViT-g/14 | 81.6     | 85.7     | 67.5      | 78.4 | 91.2    | 38.3 |

[Fine-grained linear-evaluation table (Food, SUN, Cars, Aircraft, VOC, DTD) truncated in extraction.]
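The linear-probe protocol itself is simple: backbone features are frozen and only a linear classifier is trained on top. Here is a minimal sketch with random stand-in features; the feature dimension and class count are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for frozen backbone embeddings (e.g., from a ViT) and labels.
train_x, train_y = rng.normal(size=(1000, 384)), rng.integers(0, 10, 1000)
test_x, test_y = rng.normal(size=(200, 384)), rng.integers(0, 10, 200)

# The "linear probe": a single linear classifier over frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_x, train_y)
print("top-1 accuracy:", probe.score(test_x, test_y))
```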
DINOv2- Learning Robust Visual Features without Supervision
recommender systems. The use of listwise ranking is found to strike the best balance between cost and performance. Furthermore, ChatGPT shows promise in addressing the cold-start problem and providing interpretable recommendations. Moreover, the research by Yuan et al. [227] and Li et al. [103] demonstrated the promising potential of the modality-based recommendation model (MoRec) and text-based collaborative filtering (TCF) in recommendation systems.
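For illustration, a listwise prompt might present the entire candidate slate in one query and ask for an ordering, rather than scoring items one by one (pointwise) or in pairs (pairwise); the template below is a hypothetical sketch, not the exact prompt used in the cited studies.

```python
def listwise_ranking_prompt(user_history, candidates):
    """Format a listwise re-ranking query: the model sees the whole
    candidate list at once and returns an ordering, which tends to be
    cheaper than pairwise or pointwise scoring."""
    items = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        f"The user recently interacted with: {', '.join(user_history)}.\n"
        f"Rank the following candidates from most to least relevant:\n{items}\n"
        "Answer with the item numbers in ranked order."
    )
```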
ASurveyonEvaluationofLargeLanguageModels
Table 4: Standard prompting versus chain of thought prompting on five commonsense reasoning benchmarks (CSQA, StrategyQA, Date, Sports, SayCan), with standard and CoT columns for each, across UL2 20B, LaMDA (420M–137B), GPT (350M–175B), Codex, and PaLM (8B–540B). Chain of thought prompting is an emergent ability of model scale—it does not positively impact performance until used with a model of sufficient scale. [Per-model numeric columns garbled in extraction.]
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
In fact, the visible pixels of the facial texture by the given camera pose are directly recoverable from the input image via inverse rasterization of the fitted 3D mesh. Therefore, we cast the 3D face reconstruction problem as an image inpainting task in the UV space; i.e. the goal is to fill in the missing pixels in a consistent manner with respect to some statistical prior. In particular, we propose to use a diffusion model as the generative backbone of our method. Diffusion models [62] are naturally associated with guided image synthesis since they treat image generation as a sequence of denoising steps in the form of a learnable Markov process. This allows one to directly interfere with the sampling process, given that samples at each part of the chain are distorted versions of real images with known noise variances. Thus, by properly modifying the sampling process, a single unconditional diffusion model can be used for differ-
Relightify-Relightable3DFacesfromaSingleImageviaDiffusionModels
In the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model performance often necessitates an escalation in model size. However, this scaling tends to increase computational costs and inference latency, thereby raising barriers to deployment in practical, real-world scenarios. In this context, the search for balanced models delivering both high-level performance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that a carefully designed language model can deliver high performance while maintaining efficient inference. Mistral 7B outperforms the previous best 13B model (Llama 2, [26]) across all tested benchmarks, and surpasses the best 34B model (LLaMa 34B, [25]) in mathematics and code generation. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [20], without sacrificing performance on non-code related benchmarks.
Mistral7B
5.3
6 Conclusion
7 Acknowledgements
8 Author Contributions
9 Appendix
9.1 Model Release
9.2 Implementation Details and UL2 code
9.3 Details of Supervised Finetuning SOTA runs
9.4 Details of Prompts for few-shot and zero-shot
UL2- Unifying Language Learning Paradigms
Clinical trial recommendation
Let us imagine a health care company that uses an AI system to support cancer-diagnosed patients in finding experimental treatments (early access programs, or EAPs). A patient provides the system with a description of his medical history (relevant documents, symptoms, diagnosis, etc.), which in turn extracts the salient information using text analysis, then finds the list of candidate clinical trials from the company's knowledge base using the search engine component. A medical expert in the company then has to verify the identified trials based on his expertise, and present a full report to the patient. The expert's time and efforts could be considerably reduced when combining the patient's data with structured knowledge such as medical ontologies, thesauri, PubMed's evidence about previous studies, and provide the
Knowledge graphs as tools for explainable machine learning: A survey
3.7.4 Visual Evaluation
Another way to evaluate what information is or is not contained in a representation is to use a decoder over the representation that is able to map this information back to pixel space. Some methods like [He et al., 2022] are built with a specific decoder, which makes such visual analysis easy; however, most SSL methods aren't shipped with a decoder. To alleviate this issue and to allow researchers to visualize what can be learned by any type of SSL method, Bordes et al. [2022b] suggest training a conditional generative diffusion model using an SSL representation as conditioning. By analyzing which information remains constant across different generated samples under a specific conditioning and which information does not (because of the stochasticity in the generative model), one can get some hints about what information is contained in the representation. If a representation encoded all information about each pixel, the conditional generative model would
A Cookbook of Self-Supervised Learning
researchers confirmed that demand types relate to demand distribution shapes [21, 22]. While demand forecasting can be conceived as a time series forecasting problem, it can also be framed as a supervised regression learning
Knowledge-graph-based-rich-and-confidentiality-preserving-Ex_2022_Informatio
a blank symbol representing gaps between output symbols and computes the loss function by summing probabilities across all possible paths. The loss function encourages the model to assign high probabilities to correct output symbols and low probabilities to incorrect output symbols and the blank symbol, allowing the model to predict sequences of varying lengths. The CTC loss is commonly used with RNNs such as LSTM and GRU, which are well-suited for sequential data. CTC loss is a powerful tool for training neural networks to perform sequence-to-sequence tasks where the input and output sequences have varying lengths and mappings between them are not one-to-one.
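A minimal sketch of how this looks in practice, using PyTorch's nn.CTCLoss on random stand-in data; the shapes and the choice of 0 as the blank index are illustrative assumptions.

```python
import torch
import torch.nn as nn

T, N, C = 50, 4, 20   # input length, batch size, number of classes (blank = 0)
S = 12                # target length per batch element

# Stand-in for per-frame log-probabilities from an LSTM/GRU acoustic model.
logits = torch.randn(T, N, C, requires_grad=True)
log_probs = logits.log_softmax(dim=2)

targets = torch.randint(1, C, (N, S))        # label ids; 0 is reserved for blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

# CTCLoss sums probabilities over all alignments via dynamic programming.
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(loss.item())
```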
AReviewofDeepLearningTechniquesforSpeechProcessing
MODEL ACCURACY
The system's efficiency and effectiveness are heavily dependent on the accuracy of GPT-4 and the LangChain framework. If the model's predictions or generated tasks are incorrect or irrelevant, the system may struggle to complete the desired tasks effectively or may produce undesired results.

5.4 SYSTEM OVERLOAD AND SCALABILITY
As the system generates new tasks based on completed results, there is a risk of system overload if the task generation rate exceeds the completion rate. This may
Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications – Yohei Nakajima
General reasoning abilities are evidenced by frontier AI producing remarkably apt responses to novel questions. For example, PaLM's ability to understand the humour behind jokes which had never before been told.57 However, there is also evidence that models rely heavily on memorisation and basic heuristics:
● LLMs perform less well when a question is reworded to make it different from text that is in their training data.58
● LLMs often solve complex problems using overly-simple heuristics that would fail to solve other similar problems.59
● There are instances where LLMs fail to apply information from their training data in very basic ways.60
Beyond an uncertain ability to generalise to new contexts, other key limitations of current frontier AI models include:
● Hallucinations: AI systems regularly produce plausible yet incorrect answers and state
Capabilities and risks from frontier AI
realistic summarization tasks. Experiments demonstrate reduced hallucination for two 13B-parameter LLMs, highlighting the effectiveness of synthetic data for mitigating undesired behaviors.
AComprehensiveSurveyofHallucinationMitigationTechniquesinLarge LanguageModels
In Proceedings of the AAAI conference on artificial intelligence.
[148] Ratish Puduppully and Mirella Lapata. 2021. Data-to-text generation with macro planning. Transactions of the Association for Computational Linguistics (2021).
[149] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Blog 1, 8 (2019), 9.
[150] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1–67. http://jmlr.org/papers/v21/20-074.html
[151] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence Level Training with Recurrent Neural Networks. ICLR (2016).
SurveyofHallucinationinNatural Language Generation
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(ARTICLE):2493–2537, 2011.
Conneau, A., Ma, M., Khanuja, S., Zhang, Y., Axelrod, V., Dalmia, S., Riesa, J., Rivera, C., and Bapna, A. Fleurs: Few-shot learning evaluation of universal representations of speech. arXiv preprint arXiv:2205.12446, 2022.
Del Rio, M., Delworth, N., Westerman, R., Huang, M., Bhandari, N., Palakapilly, J., McNamara, Q., Dong, J., Zelasko, P., and Jetté, M. Earnings-21: a practical benchmark for ASR in the wild. arXiv preprint arXiv:2104.11348, 2021.
Galvez, D., Diamos, G., Torres, J. M. C., Achorn, K., Gopi, A., Kanter, D., Lam, M., Mazumder, M., and Reddi, V. J. The people's speech: A large-scale diverse english speech recognition dataset for commercial usage. arXiv preprint arXiv:2111.09344, 2021.
RobustSpeechRecognitionviaLarge-ScaleWeakSupervision
Diffusion models have been proven to be highly effective in various machine learning tasks related to computer vision, as well as speech-processing tasks. The recent development of DiffSep [482] for speech separation, which is based on score-matching of a stochastic differential equation, has shown competitive performance on the VoiceBank-DEMAND dataset. Additionally, Separate And Diffuse [357], another diffusion-based model that utilizes a pretrained diffusion model, currently represents the state-of-the-art performance in various speech separation benchmarks (refer to Table 11). These advancements demonstrate the significant potential of diffusion models in advancing the field of machine learning and speech processing.
AReviewofDeepLearningTechniquesforSpeechProcessing
Overall, the track record of corporate transparency measures for promoting good governance has been mixed. Across multiple domains, from development projects to the private sector, it has been said that “actual evidence on transparency's impacts on accountability is not as strong as one might expect” (Fox 2007, p. 664). Corporate actors do not always play along and may only do the bare minimum without fully implementing voluntary or legislatively mandated transparency measures; as a comprehensive literature review of twenty-five years of transparency research notes, “the effects of transparency are much less pronounced than conventional wisdom suggests” (Cucciniello, Porumbescu, and Grimmelikhuijsen 2017, p. 32). Empirical work into the results of transparency initiatives has shown its important limitations – for
Social_Media_and_Democracy
3 Large Language Models Cannot Self-Correct Reasoning Yet

Table 1: Results of GPT-3.5 and GPT-4 on reasoning benchmarks with the setting in Section 3.1.1.

|               | GPT-3.5 Standard Prompting | GPT-3.5 Self-Correct (Oracle) | GPT-4 Standard Prompting | GPT-4 Self-Correct (Oracle) |
|---------------|----------------------------|-------------------------------|--------------------------|-----------------------------|
| GSM8K         | 75.9                       | 84.3                          | 95.5                     | 97.5                        |
| CommonSenseQA | 75.8                       | 89.7                          | 82.0                     | 85.5                        |
| HotpotQA      | 26.0                       | 29.0                          | 49.0                     | 59.0                        |

Other Setup. We prompt the models to undergo a maximum of two rounds of self-correction, using the default temperature (1.0). Following Kim et al. (2023); Shinn et al. (2023); Welleck et al. (2023), we use the correct label to determine when to stop the self-correction loop.

3.1.2 RESULTS
Table 1 summarizes the results. From these results, we observe significant performance improvements, consistent with the findings presented in Kim et al. (2023); Shinn et al. (2023).

3.1.3 REFLECTION

Table 2: Comparison of Self-Correct (Oracle) with a Random Baseline on CommonSenseQA. [Numeric columns garbled in extraction.]
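A sketch of the oracle-stopped loop described in the setup, with model as a hypothetical text-completion callable and invented prompt wording. Note that consulting the gold label to decide when to stop is exactly what makes the "oracle" numbers optimistic.

```python
def self_correct_with_oracle(model, question, gold_answer, max_rounds=2):
    """Oracle-stopped self-correction: re-prompt the model to review its
    answer, stopping as soon as the gold label is matched."""
    answer = model(f"Q: {question}\nA:")
    for _ in range(max_rounds):
        if answer.strip() == gold_answer:   # oracle check against the label
            break
        answer = model(
            f"Q: {question}\nYour previous answer was: {answer}\n"
            "Review your previous answer and find problems with it. "
            "Then answer the question again:"
        )
    return answer
```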
LARGELANGUAGEMODELSCANNOTSELF-CORRECT REASONINGYET
sustainable way.
• They empower individuals to self-organise and commit to being fair, transparent and accountable about the data and resources they contribute.
informatics-phd-projects-2022-23
Stable Audio: Fast Timing-Conditioned Latent Audio Diffusion — Stability AI
https://stability.ai/research/stable-audio-efficient-timing-latent-diffusion
Stable Audio_ Fast Timing-Conditioned Latent Audio Diffusion — Stability AI
If we allow ourselves to dream multiple decades out, then it’s easy to imagine a future where Generative AI is deeply embedded in how we work, create and play: memos that write themselves; 3D print anything you can imagine; go from text to Pixar film; Roblox-like gaming experiences that generate rich worlds as quickly as we can dream them up. While these experiences may seem like science fiction today, the rate of progress is incredibly high—we have gone from narrow language models to code auto-complete in several years—and if we continue along this rate of change and follow a “Large Model Moore’s Law,” then these far-fetched scenarios may just enter the realm of the possible.  Call for Startups We are at the beginning of a platform shift in technology. We have already made a number of investments in this landscape and are galvanized by the ambitious founders building in this space.
Generative AI A Creative New World Sequoia Capital
Figure 3: RMT inference scales linearly with respect to the input sequence length. We estimate the required FLOP increase for the forward pass compared to running models on sequences with 512 tokens. a: lengths from 512 to 32,000 tokens. b: lengths from 32,000 to 2,048,000 tokens. The RMT segment length is fixed at 512 tokens. While larger models (OPT-30B, OPT-175B) tend to exhibit near-linear scaling on relatively short sequences up to 32,000 tokens, they reach quadratic scaling on longer sequences. Smaller models (OPT-125M, OPT-1.3B) demonstrate quadratic scaling even on shorter sequences. On sequences with 2,048,000 tokens, RMT can run OPT-175B with ×29 fewer FLOPs, and OPT-135M with ×295 fewer FLOPs.
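The intuition behind the linear scaling can be reproduced with a toy FLOP count that tracks only self-attention cost: full-context attention grows quadratically in sequence length, while segment-wise recurrence grows linearly in the number of 512-token segments. Real forward-pass FLOPs also include feed-forward layers, which is why the paper's measured ratios differ from this attention-only estimate.

```python
def attention_flops(seq_len, full_context=True, segment=512):
    """Toy attention-only FLOP model (constant factors dropped)."""
    if full_context:
        return seq_len ** 2              # attend over the whole sequence
    n_segments = -(-seq_len // segment)  # ceil division
    return n_segments * segment ** 2     # fixed-size attention per segment

for n in (32_000, 2_048_000):
    ratio = attention_flops(n) / attention_flops(n, full_context=False)
    print(f"{n:>9} tokens: ~{ratio:,.0f}x fewer attention FLOPs with segments")
```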
Scaling Transformer to 1M tokens and beyond with RMT
As would be expected, scaling up the number of model parameters or the size of the dataset greatly improves model performance (see Figure A6 for scaling with dataset size). However, even when only 10 samples can be submitted, scaling up the total number of samples leads to massive improvements in model solve rate. Figure 6 shows how the model performance scales on the 10@k and pass@k metrics with more samples, i.e. as we increase k. The difference between the two metrics highlights the importance of selecting which samples to submit. Figure 7 shows how performance scales with the amount of compute used for training and for sampling. These scaling curves highlight a few interesting facts about this problem domain and our models:
Solve rates scale log-linearly with more samples. Both the 10@k and pass@k solve rates scale approximately log-linearly with k, with the 10@k curve bending down slightly at high sample budgets.
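pass@k is commonly computed with the unbiased estimator from the Codex paper (Chen et al., 2021), sketched below; note that AlphaCode's 10@k additionally involves choosing which 10 of the k samples to submit, which this estimator does not model.

```python
import numpy as np

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: the chance that at least one of k samples,
    drawn without replacement from n generations of which c are correct,
    solves the problem. Computed as 1 - C(n-c, k) / C(n, k), stably."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=1000, c=25, k=10))  # ~0.22 for a 2.5% per-sample solve rate
```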
alphacode
Early successes like Wikipedia did not generalize into a broader principle that crowds could effectively and reliably filter for truth and against falsity.71 Regardless of its causal impact on voting behavior and political perceptions, the 2016 US presidential election cycle demonstrated at the very least that concerted efforts to spread disinformation can be wildly successful in being shared online rather than quickly weeded out. Organic filtration by the wisdom of the crowds was less robust against deliberate manipulation than originally expected (Hwang 2017).
Social_Media_and_Democracy
The force required to accelerate a 10 kg object at 5 m/s2 can be calculated using the formula: F = m * a where F is the force, m is the mass of the object, and a is the acceleration. So, the force required to accelerate a 10 kg object at 5 m/s2 would be: F = 10 kg * 5 m/s2 = 50 N When the weight is 2 kg, the force required to accelerate the object would be: F = 2 kg * 5 m/s2 = 10 N Therefore, the force required to accelerate a 10 kg object at 5 m/s2 is 50 N, while the force required to accelerate the same object when its weight is 2 kg is 10 N. The force required to accelerate a 10 kg object at 5 m/s2 is equal to the weight of the object, which is 2 kg x 9.8 m/s2 = 19.6 N. So the force required to accelerate the 10 kg object at 5 m/s2 is 19.6 N. Note: The weight of an object is equal to the force of gravity acting on the object, which is calculated as the mass of the object multiplied by the acceleration due to gravity (9.8 m/s2).
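Of the two responses above, the first applies Newton's second law directly, while the second conflates the object's mass with its weight (multiplying by g = 9.8 m/s²). A two-line arithmetic check:

```python
def force(mass_kg, accel_ms2):
    return mass_kg * accel_ms2   # Newton's second law, F = m * a

print(force(10, 5))  # 50 N, matching the first response
print(force(2, 5))   # 10 N for the 2 kg case
```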
WizardLM- Empowering Large Language Models to Follow Complex Instructions
[10] Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, and Kyunghyun Cho. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine. arXiv:1704.05179 [cs], April 2017. URL http://arxiv.org/abs/1704.05179.
[11] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1082. URL https://www.aclweb.org/anthology/P18-1082.
Retrieval-AugmentedGenerationfor Knowledge-IntensiveNLPTasks
6.4 PERPLEXITY WITH LONG CONTEXTS
StarCoder_paper (1)
framework tailored for structured pruning of LLMs, offering task-agnostic compression and efficient data usage. LLM-Pruner integrates a dependency detection mechanism to identify interconnected structures in the model. It utilizes an effective importance estimation approach, combining both first-order data and estimated Hessian information. This approach streamlines the selection of prime groups for pruning, enhancing the compression procedure. [121] propose LoSparse (Low-Rank and Sparse approximation), a novel model compression technique that approximates a weight matrix by the sum of a low-rank matrix and a sparse matrix. Pruning enhances the diversity of low-rank approximations, and low-rank approximation prevents pruning from losing too many expressive neurons. [122] further considers pruning the hidden dimension (e.g., embedding layers, layer normalization) of LLMs besides pruning the attention heads and feed-forward layers. [123] proposed a new structured compression approach for
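To make the low-rank-plus-sparse idea concrete, here is a toy post-hoc decomposition via truncated SVD plus magnitude thresholding of the residual; LoSparse itself learns both terms during training, so this sketch only illustrates the decomposition, not the actual method.

```python
import numpy as np

def low_rank_plus_sparse(W, rank, sparsity):
    """Approximate W ~= L + S: L is a rank-`rank` truncated SVD of W,
    S keeps only the largest-magnitude residual entries (a `sparsity`
    fraction of them)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # low-rank term
    residual = W - L
    thresh = np.quantile(np.abs(residual), 1 - sparsity)
    S = np.where(np.abs(residual) >= thresh, residual, 0.0)  # sparse term
    return L, S

W = np.random.default_rng(0).normal(size=(64, 64))
L, S = low_rank_plus_sparse(W, rank=8, sparsity=0.05)
print("relative error:", np.linalg.norm(W - L - S) / np.linalg.norm(W))
```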
Beyond Efficiency
for PCs that take advantage of their ability to efficiently compute arbitrary marginal probabilities. Specifically, we first show which kinds of marginal probabilities are required for (de)compression. The proposed algorithm combines an inference algorithm that computes these marginals efficiently given a learned PC and SoTA streaming codes that use the marginals for en- and decoding. Competitive compression rates. Our experiments show that on MNIST and EMNIST, the PC-based compression algorithm achieved SoTA bitrates. On more complex data such as subsampled ImageNet, we hybridize PCs with normalizing flows and show that PCs can significantly improve the bitrates of the base normalizing flow models. Competitive runtimes. Our (de)compressor runs 5-40x faster compared to available implementations of neural lossless compressors with near SoTA performance on datasets such as MNIST.1 Our open-source implementation of the PC-based (de)compression algorithm can be found at https:
LOSSLESS COMPRESSION WITH PROBABILISTIC CIRCUITS
ered emergent from prior literature. Our observation that only two out of 14 previously-emergent tasks displayed emergence, and the fact that one of these tasks represents formal linguistic abilities and the other represents memorisation, casts doubt on claims that emergent tasks indicate LLM reasoning abilities. It also points to a need for a more thorough analysis of tasks along the lines of memorisability, data leakage, quality (e.g., number of examples in the test set), and a classification of such tasks into those requiring formal linguistic abilities and functional linguistic abilities as defined by Mahowald et al. (2023).
AreEmergentAbilitiesinLarge Language Models just In-Context
Table 17: Examples of correct and incorrect chains of thought produced by LaMDA 137B on Date Understanding.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Lila and Tom are playing with their toys in the living room. Lila has a smooth doll with long hair and a pink dress. Tom has a horn that makes a loud noise when he blows it. Lila likes to comb her doll's hair and make her look pretty. Tom likes to make his horn sound and scare Lila. "Tom, stop it!" Lila says. "Your horn is too loud. It hurts my ears." "But it is fun!" Tom says. "Look, I can make it sound like a car, or a cow, or a lion!" He blows his horn again and again, making different noises. Lila covers her ears and frowns. She does not like Tom's horn. She wants him to be quiet. "Tom, please shut your horn!" Lila says. "I want to play with my doll. She does not like loud noises. She likes soft music and nice words." (rest of story omitted)

Figure 18: The closest point in the dataset to an alternative completion

• When completing stories from the dataset, the completions usually turn out to be very different than the original story.
TinyStories-HowSmallCanLanguageModelsBeandStillSpeak CoherentEnglish?
[Flattened per-task results table comparing T5 and Flan-T5 across scales: 80M (T5-Small, Flan-T5-Small), 250M (T5-Base, Flan-T5-Base), 780M (T5-Large, Flan-T5-Large), and 3B (T5-XL, Flan-T5-XL); individual task scores not recoverable.]
Mixture-of-Experts
Weidinger, Iason Gabriel, William S. Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. Scaling language models: Methods, analysis & insights from training gopher. arXiv Preprint, 2021. URL https://arxiv.org/abs/2112.11446.
Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
horizon text instructions generated by the LLM. Therefore, it is worth exploring methods for generating plans that are easier for the controller to execute or improving the controller's ability to follow instructions.

4.3. Ablation Studies
4.3.1. JARVIS-1 BASED ON DIFFERENT LMS
We conducted ablation experiments on various Language Models, including OpenAI's ChatGPT [Ouyang et al., 2022] and GPT-4 [OpenAI, 2023]. Among these models, GPT-4 has more parameters and has been proven to outperform ChatGPT in extensive research [Wang et al., 2023b]. We also select the open-source pre-trained LLaMA2 70B (LLaMA2 PT) model [Touvron et al., 2023]. Additionally, we gathered a substantial amount of Minecraft-related text from the internet as training data and further fine-tuned LLaMA2 13B (LLaMA FT). The experiments were conducted
JARVIS-1
rich? Where there are particular concerns about the availability of material or the sensitivity of the topic you must clearly demonstrate the feasibility of the project. Third, you should describe how you intend to analyse your research materials. Will you be using statistical analysis (what kind?), discourse analysis, content analysis, constructing historical chronologies or analytic narratives – or, as is often the case in development studies, a combination of one or more of these methods? Be as specific as possible in describing the approach that you will use and be sure to discuss the advantages and potential limitations of your chosen method(s) and the biases of your sources.
Writing a DPhil Research Proposal
A Priority Map for Vision-and-Language Navigation with Trajectory Plans and Feature-Location Cues
Jason Armitage (University of Zurich, Switzerland), Leonardo Impett (University of Cambridge, UK), Rico Sennrich (University of Zurich, Switzerland)
[email protected] [email protected] [email protected]
Abstract
APriorityMapforVision-and-LanguageNavigation withTrajectoryPlansandFeature-LocationCues
[263] Solaiman, I., C. Dennison. Process for adapting language models to society (palms) with values-targeted datasets. Advances in Neural Information Processing Systems, 34:5861–5873, 2021. [264] Bach, S. H., V. Sanh, Z. X. Yong, et al. Promptsource: An integrated development environment and repository for natural language prompts. In V. Basile, Z. Kozareva, S. Stajner, eds., Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - System Demonstrations, Dublin, Ireland, May 22-27, 2022, pages 93–104. Association for Computational Linguistics, 2022. [265] Iyer, S., X. V. Lin, R. Pasunuru, et al. OPT-IML: scaling language model instruction meta learning through the lens of generalization. CoRR, abs/2212.12017, 2022. [266] Winston, P. H. Learning and reasoning by analogy. Commun. ACM, 23(12):689–703, 1980.
TheRiseandPotentialofLargeLanguageModel BasedAgents
Figure 2: Trade-off between NFE and different metrics of interest.

audio, the shorter audio is used as the prompt. Results are shown in Figure 3. As expected, WER mildly decreases, and SIM-r grows quickly and then flattens with longer audio prompts. Comparing against VALL-E, Voicebox is more efficient at leveraging an audio prompt, achieving the same speaker similarity as VALL-E with roughly two thirds the input audio.

Figure 3: WER (a) and speaker similarity (b) as a function of prompt audio time in seconds for the Zero-shot TTS task (Section 5.2). Audio is generated using a classifier-free guidance strength (α) of 0.7 and a midpoint ODE solver with an NFE of 32. The blue line is for Voicebox and the red star is VALL-E at 3 seconds. The speaker similarity (SIM-r) remains the same for longer prompts (up to 10s).
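For reference, the midpoint ODE solver mentioned in the caption spends two network evaluations per integration step, so an NFE of 32 corresponds to 16 steps. A generic sketch follows, with velocity_fn standing in for the trained flow network (any guidance is assumed to be folded into it); this is an illustration of the solver, not Voicebox's implementation.

```python
import numpy as np

def midpoint_ode_solve(velocity_fn, x0, nfe=32):
    """Explicit midpoint integration of dx/dt = v(x, t) from t=0 to t=1.
    Each step evaluates the velocity network twice, so nfe=32 -> 16 steps."""
    steps = nfe // 2
    dt = 1.0 / steps
    x, t = x0, 0.0
    for _ in range(steps):
        k1 = velocity_fn(x, t)                       # slope at the start
        x_mid = x + 0.5 * dt * k1                    # half-step prediction
        x = x + dt * velocity_fn(x_mid, t + 0.5 * dt)  # slope at the midpoint
        t += dt
    return x

# Toy velocity field standing in for the trained model.
print(midpoint_ode_solve(lambda x, t: -x, np.array([1.0])))  # decays toward 0
```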
Voicebox-Text-GuidedMultilingual UniversalSpeechGenerationatScale
Bard is part of our long-term, ongoing effort to develop LLMs responsibly, and throughout the course of this work, we have discovered and discussed several. Here, we focus on five areas that we continue to work on: (a) accuracy: Bard's responses might be inaccurate, especially when asked about complex or factual topics; (b) bias: Bard's responses might reflect biases or perspectives present in its training data; (c) persona: Bard's responses might suggest it as having personal opinions or feeling
An overview of Bard- an early experiment with generative AI
Are Interventions Within the CDA 230 Framework Sufficient? As discussed in “Part II: How Does CDA 230 Shape Efforts to Combat Online Political Disinformation?,” CDA 230 does not function as a categorical block to potential challenges of political disinformation. Its impact is considerably more specific: It limits interventions that would serve to treat the platform as the publisher or speaker of an act, applying the liability of a given user to the platform as a whole. It does not hinder a range of potential legislative actions to mandate greater transparency from the platforms, enforce more robust disclosure on the part of users, or even modify the mechanics of how information is distributed by services like Facebook or Google. Nor does CDA 230 serve to block potential actions by courts using the precedent set in Roommates.com to selectively eliminate immunity. An immediate question is whether or not CDA 230 is merely a
Social_Media_and_Democracy
on a corpus that covers both biomedical articles and clinical notes, with the goal of building a unified and comprehensive model. However, it has been reported that models pre-trained on clinical notes can perform poorly on language tasks based on biomedical articles, and vice versa (Gu et al., 2021; Alsentzer et al., 2019a; Lehman et al., 2023). Addressing the substantial differences between text modalities is an open question that requires further investigation to improve the transferability of biomedical language models.
BiomedGPT
3.1 Fact Memorization
The first task tests the ability of RMT to write and store information in memory for an extended time (Figure 4, top). In the simplest case, the fact is always located at the beginning of the input, and the question is always at the end. The amount of irrelevant text between the question and answer is gradually increased, so that the entire input does not fit into a single model input.

Fact: Daniel went back to the hallway.
Question: Where is Daniel?
Answer: hallway
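Such inputs are easy to synthesize: place the fact first, pad with irrelevant sentences until the input exceeds a single segment, and append the question. A sketch, with all filler text and names invented for illustration:

```python
import random

def make_memorization_sample(fact, question, answer, filler_sents, n_filler):
    """Build a bAbI-style memorization input: fact first, a growing block
    of irrelevant text, and the question last, forcing the model to carry
    the fact across segments in memory."""
    noise = " ".join(random.choices(filler_sents, k=n_filler))
    return f"Fact: {fact} {noise} Question: {question}", answer

sample, label = make_memorization_sample(
    "Daniel went back to the hallway.", "Where is Daniel?", "hallway",
    ["Mary had a little lamb.", "The sky is blue.", "Cats sleep a lot."],
    n_filler=50,
)
```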
Scaling Transformer to 1M tokens and beyond with RMT
vision-based robotic manipulation. CoRR, abs/1806.10293, 2018.
[362] Nguyen, H., H. M. La. Review of deep reinforcement learning for robot manipulation. In 3rd IEEE International Conference on Robotic Computing, IRC 2019, Naples, Italy, February 25-27, 2019, pages 590–595. IEEE, 2019.
[363] Dasgupta, I., C. Kaeser-Chen, K. Marino, et al. Collaborating with language models for embodied reasoning. CoRR, abs/2302.00763, 2023.
[364] Puig, X., K. Ra, M. Boben, et al. Virtualhome: Simulating household activities via programs. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 8494–8502. Computer Vision Foundation / IEEE Computer Society, 2018.
[365] Hong, Y., Q. Wu, Y. Qi, et al. A recurrent vision-and-language BERT for navigation. CoRR, abs/2011.13922, 2020.
[366] Suglia, A., Q. Gao, J. Thomason, et al. Embodied BERT: A transformer model for embodied,
TheRiseandPotentialofLargeLanguageModel BasedAgents
6 Acknowledgements
We express our gratitude to Jinze Bai, Shuai Bai, Peng Wang, Sinan Tan, Shijie Wang for their insightful discussion. We would like to thank Juan Zhu, Junyang Lin, Siqi Zheng, Jiaming Wang and Zhihao Du for their support of this project.

References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. NeurIPS, 2022.
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14. Springer, 2016.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv:2305.10403, 2023.
Qwen-Audio
LONDON’S GLOBAL UNIVERSITY UCL Academic Manual 2022-23 Chapter 1: Student Recruitment and Admissions Framework Chapter 1 is UCL’s regulatory framework for the recruitment and admission of students to UCL.
UCL Academic Manual
References
Ahdritz, G., Bouatta, N., Kadyan, S., Xia, Q., Gerecke, W., O'Donnell, T. J., Berenberg, D., Fisk, I., Zanichelli, N., Zhang, B., et al. Openfold: Retraining alphafold2 yields new insights into its learning mechanisms and capacity for generalization. bioRxiv, 2022.
Andonian, A., Anthony, Q., Biderman, S., Black, S., Gali, P., Gao, L., Hallahan, E., Levy-Kramer, J., Leahy, C., Nestler, L., Parker, K., Pieler, M., Purohit, S., Songz, T., Phil, W., and Weinbach, S. GPT-NeoX: Large scale autoregressive language modeling in PyTorch, 8 2021. URL https://www.github.com/eleutherai/gpt-neox.

[Figure axis residue: panels (a) 160M, (b) 1.0B, (c) 2.8B, (d) 12B over training steps 13,000–143,000.]
Pythia- A Suite for Analyzing Large Language Models Across Training and Scaling
Medium. (2015). Medium's 2015 Transparency Report. Medium report. https://blog.medium.com/medium-s-2015-transparency-report-5c6205c48afe
Meleagrou-Hitchens, A., & Kaderbhai, N. (2017). Research Perspectives on Online Radicalisation: A Literature Review, 2006–2016. VOX-Pol report. www.voxpol.eu/new-vox-pol-report-research-perspectives-online-radicalisation
Munger, K. (2017). Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 39(3), 629–649. https://link.springer.com/article/10.1007/s11109-016-9373-5
Social_Media_and_Democracy
Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1–9, 2016.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
Guolin Ke, Di He, and Tie-Yan Liu. Rethinking positional encoding in language pre-training. In International Conference on Learning Representations, 2019.
Yash Khare, Viraj Bagal, Minesh Mathew, Adithi Devi, U Deva Priyakumar, and CV Jawahar. Mmbert: multimodal bert pretraining for improved medical vqa. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 1033–1036. IEEE, 2021.
BiomedGPT
5 Experiments
We conduct extensive experiments to answer the following three questions:
1. How well does LLM-AS-P work? That is, to what extent can LLMs be directly used for planning? (Not at all)
2. How well does LLM+P work compared to LLM-AS-P? (Much better)
3. What role does the context play in the success of LLM+P? (It’s crucial)
To address these questions, we first introduce a set of benchmark problems borrowed from the classical planning community. Then, we comprehensively evaluate both LLM-AS-P and LLM+P on the proposed benchmark problems. In the end, we provide the results, analysis, and example failure cases that could lead to future improvement.
5.1 Benchmark Problems
LLM+P- Empowering Large Language Models with Optimal Planning Proficiency
Please do not hesitate to contact us if you have any questions. We are glad to meet you!
Please note our information on data protection in the application process at https://www.tu-clausthal.de/universitaet/karriere-ausbildung/stellenangebote/hinweise-zum-datenschutz-im-bewerbungsverfahren
Application costs cannot be reimbursed. The application documents will be destroyed after completion of the selection process in compliance with legal requirements.
Required Documents
Letter of Motivation
CV
Reports, certificates
Transcripts
Application
https://www.isse.tu-clausthal.de/
Contact
Technische Universität Clausthal
Institut für Software and Systems Engineering
Dr. Stefan Wittek
_2 Doctoral Researcher (m_w_d) in the field of Large Language Models (LLM) for Software Engineering_ - Technische Universität Clausthal - DAAD
SQL: SELECT COUNT(DISTINCT status) FROM city
The execution of the SQL query above would return a table with 1 column. The first column, "COUNT(DISTINCT status)", would contain the number of different statuses of cities. So the SQL query returns a table with 1 column, the number of different statuses of cities.
Feedback: As in your explanation, the SQL query returns a table with 1 column, the number of different statuses of cities. The question returns 1 column, the number of different statuses of cities. So the SQL prediction above is correct!
CREATE TABLE user_profiles ( uid number , name text , followers number , primary key ( uid ) )
CREATE TABLE tweets ( id number , uid number , primary key ( id ) , foreign key ( uid ) references user_profiles ( uid ) )
Translate the following question into SQL.
Question: Find the average number of followers for the users who had some tweets.
SQL: SELECT AVG(followers) FROM user_profiles
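The excerpt above illustrates the self-debugging prompt format: the model explains what its SQL actually returns, then gives itself feedback and revises. A minimal sketch of how such a loop might be driven is below; llm() and execute_sql() are hypothetical helpers standing in for a completion call and a database executor, not the paper's code:

```python
# Minimal sketch of a self-debugging loop for text-to-SQL (an assumption-laden
# illustration, not the paper's implementation). llm() and execute_sql() are
# placeholders supplied by the caller.
def self_debug_sql(schema: str, question: str, llm, execute_sql, max_turns: int = 3) -> str:
    sql = llm(f"{schema}\nTranslate the following question into SQL.\nQuestion: {question}\nSQL:")
    for _ in range(max_turns):
        rows = execute_sql(sql)  # inspect the query's actual output
        explanation = llm(f"SQL: {sql}\nReturned: {rows}\nExplain what this query returns.")
        feedback = llm(
            f"Question: {question}\nSQL: {sql}\nExplanation: {explanation}\n"
            "Is the SQL prediction correct? If not, describe the fix.\nFeedback:"
        )
        if "correct!" in feedback.lower():
            return sql
        sql = llm(f"Rewrite the SQL based on this feedback:\n{feedback}\nSQL:")
    return sql
```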
Teaching Large Language Models to Self-Debug
Figure 4. Template. The models in the center are the predefined template mesh with landmarks. It can be seen that we refine the structure on specific regions, where a complex nose or tail may exist. The colored regions and delineated lines denote the landmarks. These landmarks represent specific components of the character’s body, such as elbow and eye socket. During model crafting, artists are required to deform the template model while keeping the landmarks in the position where the original body components are.
… of 3DBiCar paves the way to learn a skinned parametric model, which we will discuss in Sec. 4.
Richness. We provide various forms of data for each character. There are not only the 3D shape meshes and UV-space textures carefully crafted by artists but also collected reference images. For each character, artists are asked first
RaBit- Parametric Modeling of 3D Biped Cartoon Characters with a Topological-consistent Dataset
\[
\sum_{y} \underbrace{\pi_{\text{ref}}(y \mid x)\, \exp\!\left(\frac{1}{\beta} r(x, y)\right)}_{=\ \pi(y \mid x),\ \text{using Thm. 1 reparam.}} = 1, \tag{9}
\]
i.e., π(y | x) is a valid distribution (probabilities are positive and sum to 1). However, following Eq. 4, we can see that Eq. 9 is the partition function of the optimal policy induced by the reward function r(x, y). The key insight of the DPO algorithm is that we can impose certain constraints on the under-constrained Plackett-Luce (and Bradley-Terry in particular) family of preference models, such that we preserve the class of representable reward models, but explicitly make the optimal policy in Eq. 4 analytically tractable for all prompts x.
5.2 Instability of Actor-Critic Algorithms
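As a concrete illustration of the reparameterization above, here is a minimal sketch of the resulting preference loss: the implicit reward is β times the log-ratio of policy and reference log-probabilities, plugged into a Bradley-Terry objective. The PyTorch framing and tensor names are assumptions for illustration, not the authors' code:

```python
# Minimal sketch of the DPO loss implied by the reparameterization
# r(x, y) = beta * (log pi(y|x) - log pi_ref(y|x)).
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """All inputs are per-sequence summed log-probs, shape (batch,)."""
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)        # implicit r(x, y_w)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)  # implicit r(x, y_l)
    # Maximize the margin between preferred and dispreferred responses.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```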
Direct Preference Optimization
likely than liberals to engage in selective exposure, biased information processing, and ideological conformity (Lau and Redlawsk 2006; Garrett 2009b; Nyhan and Reifler 2010; Nam, Jost, and Van Bavel 2013; Guess et al. 2019), although other work has found symmetric patterns regarding these behaviors (Munro et al. 2002; Iyengar and Hahn 2009; Nisbet, Cooper, and Garrett 2015).
Social_Media_and_Democracy
is evaluated via accuracy and F1-score (or F1 macro-score for multiclass problems), as well as wall time. FORGE fares well in this experiment, attaining the top accuracy and F1-score in three out of five tasks. On a fourth, the highly imbalanced credit dataset, the only models that do better in terms of accuracy receive F1-scores of 0, suggesting that they entirely ignore the minority class. Only FORGE and RCC-GAN strike a reasonable balance between sensitivity and specificity on this task. Perhaps most impressive, FORGE executes over 60 times faster than its nearest competitor on average, and over 100 times faster than the second fastest method. (We omit results for algorithms that fail to converge in 24 hours of training time.) Differences in compute time would be even more dramatic if these deep learning algorithms were configured with a CPU backend (we used GPUs here), or if FORGE were run using more extensive parallelization (we distribute the job across 10
Adversarial Random Forests for Density Estimation and Generative Modeling
Proceedings of the 13th International Conference, KR 2012, Rome, Italy, 2012, pp. 446–456. [7] C. Bäckström, P. Jonsson, Bridging the gap between refinement and heuristics in abstraction, in: Proceedings of the 23rd International Joint Conference on Artificial Intelligence, IJCAI 2013, Beijing, China, 2013, pp. 2261–2267. [8] C. Bäckström, P. Jonsson, S. Ordyniak, S. Szeider, A complete parameterized complexity analysis of bounded planning, J. Comput. Syst. Sci. 81 (2015) [9] C. Bäckström, P. Jonsson, S. Ståhlberg, Fast detection of unsolvable planning instances using local consistency, in: Proceedings of the 6th Annual [10] C. Bäckström, I. Klein, Planning in polynomial time: the SAS-PUBS class, Comput. Intell. 7 (1991) 181–197. [11] C. Bäckström, B. Nebel, Complexity results for SAS+ planning, Comput. Intell. 11 (1995) 625–656. [12] J. Balcázar, The complexity of searching implicit graphs, Artif. Intell. 86 (1996) 171–188.
A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen
REGISTERED MODELS AND ML PRODUCTION
Production models have undergone the experimentation phase and are then deployed in real-world applications. They are typically used to make predictions or decisions based on new data. Registering a model is the process of recording and storing metadata about a trained model in a centralized location that allows users to easily access and reuse existing models. Registering models prior to production enables organizations to ensure consistency and reliability in model deployment and scaling. We have chosen registered models to represent ML production because the MLflow Model Registry is designed to manage models that have left the experimentation phase through the rest of their lifecycle.
Organizations test numerous approaches and variables before committing an ML model to production. We wanted to understand, “How many models do data scientists experiment with before moving to production?”
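To make the registration step described above concrete, here is a minimal sketch using the MLflow Model Registry. The model, the toy data, and the name "churn-classifier" are illustrative assumptions, and a tracking server with a registry backend is assumed to be configured:

```python
# Minimal sketch: log a trained model, then register it so it can be
# versioned and promoted toward production (illustrative, not the report's code).
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression().fit(X, y)

with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, artifact_path="model")

# Registration records the model in the central registry for reuse and review.
mlflow.register_model(f"runs:/{run.info.run_id}/model", name="churn-classifier")
```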
2023 state of ai databrick
Network testing. When testing, our network only requires an RGB image as input and outputs both the parametric model and the reconstructed surface with texture. To maximize performance, we run the body reference optimization step for all results unless otherwise stated. Fifty iterations are needed for the optimization, taking about 40 seconds.
Network complexity. We report the module complexity in Tab. 1. To reconstruct the 3D human model given an RGB image, we use the hierarchical SDF querying method
TABLE 1: The number of parameters and execution time of each network module
Module | #Parameters | Execution Time
GCMR | 46,874,690 | 0.15s∗
Geometry Network | 27,225,105 | 0.25s†
Texture Network | 13,088,268 | 0.29s†
∗ Measured using one RGB image. † Measured using one RGB image and 10k query points.
PaMIR- Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction
Each row in the matrix represents the concate-
[Figure: (a) coactivation intensity, (b) closest friend, (c) 4th closest friend, (d) 8th closest friend; companion plot of aggregated usage sagg(k) and incremental transfers sagg(k+1) − sagg(k) versus window size, with a sliding-window illustration ("Once Upon A Time There Was A Kid Who Had A Dream") distinguishing active neurons in the initial window, active neurons in the new window, new neurons, and neurons to be deleted]
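One plausible reading of the sliding-window scheme in the figure is sketched below: only neurons active for tokens inside the current window stay resident, so advancing the window loads the newly needed neurons and frees stale ones. active_neurons() is an assumed predictor and the whole routine is an interpretation of the figure, not the paper's implementation:

```python
# Minimal sketch of a sliding-window neuron cache update (an assumption-based
# illustration of the figure, not the paper's code).
def slide_window(tokens, k, window, cache, active_neurons):
    """Advance the window to end at token k and update the neuron cache in place."""
    current = set().union(*(active_neurons(t) for t in tokens[max(0, k - window):k]))
    to_delete = cache - current   # neurons no longer used anywhere in the window
    to_load = current - cache     # incremental transfer from flash
    cache -= to_delete
    cache |= to_load
    return to_load, to_delete
```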
LLM in a flash
For dialog uses, we surprisingly find that dialog-prompting alone is more effective than control tokens at reducing toxic generation. This holds true even on the standard dataset, which aims to measure explicit forms of toxicity that align more closely with the tagging method from pre-training using signals from the Perspective API. We do see small gains from layering control tokens on dialog prompting, but only on the standard dataset, as the adversarial dataset aims to measure a distinct construct from what was tagged at pre-training time.
PaLM 2 Technical Report
Q: Alice, Bob, and Claire are holding a white elephant gift exchange. At the start of the event, they are each holding a present of a different color: Alice has a pink ball, Bob has a yellow present, and Claire has a black ball. As the event progresses, pairs of people swap gifts. First, Bob and Alice swap their gifts. Then, Claire and Alice swap their gifts. Finally, Claire and Bob swap their gifts. At the end of the event, Claire has the?
Choices: A. yellow present. B. black ball. C. pink ball.
A: Reasoning process: At the start of the event, Alice has a pink ball, Bob has a yellow present, and Claire has a black ball. Bob and Alice swap their gifts, so Bob gets the pink ball and Alice gets the yellow present. Claire and Alice swap their gifts, so Claire gets the yellow present and Alice gets the black ball. Finally, Claire and Bob swap their gifts, so Claire gets the pink ball and Bob gets the yellow present. Final answer: C.
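The reasoning chain above can be checked mechanically; the short simulation below replays the three swaps and confirms that choice C is correct:

```python
# Replay the swaps from the worked example above to verify the final answer.
gifts = {"Alice": "pink ball", "Bob": "yellow present", "Claire": "black ball"}
for a, b in [("Bob", "Alice"), ("Claire", "Alice"), ("Claire", "Bob")]:
    gifts[a], gifts[b] = gifts[b], gifts[a]  # the pair exchange their gifts

assert gifts["Claire"] == "pink ball"  # choice C
print(gifts)  # {'Alice': 'black ball', 'Bob': 'yellow present', 'Claire': 'pink ball'}
```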
Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
[22] D. Lengu, A.A. Syntetos, M.Z. Babai, Spare parts management: Linking distributional assumptions to demand classification, European J. Oper. Res. 235 (2014) 624–635.
[23] R. Saluja, A. Malhi, S. Knapič, K. Främling, C. Cavdar, Towards a rigorous evaluation of explainability for multivariate time series, 2021, arXiv preprint arXiv:2104.04075.
[24] A. Dwivedi, M. Niranjan, K. Sahu, A business intelligence technique for forecasting the automobile sales using adaptive intelligent systems (ANFIS and ANN), Int. J. Comput. Appl. 74 (2013).
[25] X. Wang, D. Zeng, H. Dai, Y. Zhu, Making the right business decision: Forecasting the binary NPD strategy in Chinese automotive industry with machine learning methods, Technol. Forecast. Soc. Change 155 (2020) 120032.
[26] D.S. Farahani, M. Momeni, N.S. Amiri, Car sales forecasting using artificial neural networks and analytical hierarchy process, DATA ANALYTICS 2016 (2016) 69.
Knowledge-graph-based-rich-and-confidentiality-preserving-Ex_2022_Informatio
\[ z = [z_T, z_{A_d}, z_{A_s}, z_N] \in \mathbb{R}^{64 \times 64 \times 12} \tag{7} \]
Samples from our diffusion model (after being decoded through each D) can be seen in the left part of Fig. 1.
3.3. Inference
We use the aforementioned trained diffusion model to perform inpainting on both the texture and reflectance UV maps based on a partial UV texture obtained by 3DMM fitting. We provide a detailed description below.
[Figure: inpainting pipeline — 3DMM fitting yields a known partial texture used as guiding input; forward diffusion plus repeated denoise + MCG correction steps inpaint the texture, normals, diffuse albedo, and specular albedo UV maps, producing the reconstructed 3D avatar]
Figure 3. Examples of 3D reconstructions by our method, rendered using different environment maps in a commercial renderer [50].
… the denoising procedure of MCG [11], consisting of the following repeated steps:
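As a rough illustration of such a denoise-plus-MCG-correction loop, the sketch below alternates a reverse-diffusion step, a manifold-constrained gradient correction toward the observed partial texture, and re-imposition of the known UV region at the current noise level. All helpers are assumed placeholders; the paper's actual steps are not reproduced here:

```python
# Schematic sketch of diffusion inpainting with an MCG-style correction step
# (assumption-based illustration; denoise_step, mcg_gradient, and
# forward_diffuse stand in for the trained model's actual routines).
def inpaint(x_T, known_uv, mask, timesteps, denoise_step, mcg_gradient, forward_diffuse):
    """mask == 1 where the partial UV texture from 3DMM fitting is known."""
    x = x_T
    for t in reversed(timesteps):
        x = denoise_step(x, t)                       # standard reverse-diffusion step
        x = x - mcg_gradient(x, known_uv, mask, t)   # keep x consistent with the observation
        # Re-impose the known region, noised to the current timestep.
        x = mask * forward_diffuse(known_uv, t) + (1 - mask) * x
    return x
```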
Relightify- Relightable 3D Faces from a Single Image via Diffusion Models
Figure 8: Expert Parallelism as described in the GShard paper
Data scientists have deployed multiple replicas of the Expert Parallel distribution, known as Expert Parallel Replica, to increase training throughput when a larger number of GPUs is available. Under this strategy, as in traditional data-parallel training, experts from each EP replica are synchronized by an additional AllReduce collective.
Expert Slicing: A few drawbacks of EP are: (1) to maintain load balance, the number of experts must be larger than and divisible by the number of GPUs; (2) padding of the tokens may be needed when the number of tokens routed to an expert is fewer than the expert capacity, which wastes memory and computation; (3) each expert has a capacity on the number of tokens it takes in, so tokens exceeding the capacity are dropped, which hurts learning efficiency.
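A minimal sketch of the capacity behavior described in drawbacks (2) and (3) — padding each expert's batch to capacity and dropping overflow tokens. Shapes and the routing interface are illustrative assumptions, not any framework's actual API:

```python
# Minimal sketch of capacity-limited expert routing with padding and token
# dropping (illustrative assumption, not a library implementation).
import torch

def route_with_capacity(tokens, expert_ids, num_experts, capacity):
    """tokens: (n, d) token embeddings; expert_ids: (n,) chosen expert per token."""
    d = tokens.shape[1]
    expert_inputs = torch.zeros(num_experts, capacity, d)  # padded to capacity
    counts = [0] * num_experts
    dropped = []
    for i, e in enumerate(expert_ids.tolist()):
        if counts[e] < capacity:
            expert_inputs[e, counts[e]] = tokens[i]
            counts[e] += 1
        else:
            dropped.append(i)  # over-capacity tokens are dropped entirely
    return expert_inputs, dropped
```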
Scaling Speech, Language and Vision Models with Mixture of Experts Technique - Microsoft Community Hub
in this comic mistakenly took the dinosaur sculpture in the amusement park for a real dinosaur. Nervously, he shouted, "Help! The dinosaur is coming!" However, in the next panel, we see a staff member calmly responding, "Don't panic, it's fake."
> LLaVA-1.5: A man is standing on a boat and pointing at a woman who is sitting in a chair.
> MiniGPT-v2: A man is playing an ocarina, but it's actually a toy with a monster hiding behind a fake rock in front of him!
> mPLUG-Owl: A person walks into a bar and sees someone eating at a table. He approaches and asks, "Why is your food placed here?" The person laughs and replies, "Because I like the food here."
> VisualGLM-6B: This scene depicts the first conversation between Kogoro Mori and Shinichi Kudo in "Detective Conan".
> Qwen-VL: Oh shoot, where are my
Let’s Think Outside the Box
Meta-learning has the potential to improve speech processing tasks by learning better learning algorithms that can adapt to new tasks and data more efficiently. Meta-learning can also reduce the cost of model training and fine-tuning, which is particularly useful for low-resource speech processing tasks. Further investigation is required to realize the full potential of meta-learning in speech processing and to develop more effective meta-learning algorithms for different speech-processing tasks.
A Review of Deep Learning Techniques for Speech Processing
[Table: zero-shot accuracies on HellaSwag, PIQA, and WinoGrande across model sizes and training configurations]
Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Self-Extend LLM
[142] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language Models as Knowledge Bases?. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 2463–2473. https://doi.org/10.18653/v1/D19-1250 [143] Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis Only Baselines in Natural Language Inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. 180–191. [144] Kashyap Popat, Subhabrata Mukherjee, Jannik Strötgen, and Gerhard Weikum. 2016. Credibility Assessment of Textual Claims on the Web. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. 2173–2178.
Survey of Hallucination in Natural Language Generation
On the importance of joint text-to-image and text-to-video training. While there are some text-video datasets, text-image datasets dominate the internet in terms of quality and quantity [34]. Consequently, there is simply not enough video data available to cover all the concepts present in text-image datasets. For example, using only our video data, concepts such as pencil drawings or different painting styles cannot be learned. To be able to learn a model that can combine video dynamics with these additional concepts, we have to combine training on image and video data. In Table 2, we evaluate the performance of using different ratios of video and images. We start with data splits of only video, and vary the ratio of image and video datasets up to using 50% image and 50% video datasets. In our results, we find that there is a trade-off in performance between models trained with only video (i.e., significantly better FVD) and models trained with more image data
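A minimal sketch of how such an image/video data split might be realized at training time; the loader interface and the treatment of images as one-frame videos are assumptions for illustration, not Phenaki's code:

```python
# Minimal sketch of mixed image/video batch sampling at a fixed ratio
# (illustrative assumption; image_ratio=0.5 mirrors the 50/50 split above).
import random

def sample_batch(video_loader, image_loader, image_ratio=0.5):
    """Draw the next training batch from image data with probability
    image_ratio, otherwise from video data; images act as one-frame videos."""
    if random.random() < image_ratio:
        return next(image_loader)
    return next(video_loader)
```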
PHENAKI- VARIABLE LENGTH VIDEO GENERATION FROM OPEN DOMAIN TEXTUAL DESCRIPTIONS
Huge pretrained language models (LMs) have demonstrated surprisingly good zero-shot capabilities on a wide variety of tasks. This gives rise to the appealing vision of a single, versatile model with a wide range of functionalities across disparate applications. However, current leading techniques for leveraging a “frozen” LM—i.e., leaving its weights untouched—still often underperform fine-tuning approaches which modify these weights in a task-dependent way. Those, in turn, suffer from forgetfulness and compromise versatility, suggesting a tradeoff between performance and versatility. The main message of this paper is that current frozen-model techniques such as prompt tuning are only the tip of the iceberg, and more powerful methods for leveraging frozen LMs can do just as well as fine-tuning in challenging domains without sacrificing the underlying model’s versatility. To demonstrate this, we introduce three novel methods for leveraging frozen models:
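As background for the frozen-model baseline mentioned above, here is a minimal sketch of prompt tuning: only a small learned prefix of soft-prompt embeddings is trained while the LM weights stay frozen. A HuggingFace-style model accepting inputs_embeds is assumed, and all dimensions are illustrative:

```python
# Minimal sketch of prompt tuning over a frozen LM (an assumption-based
# illustration, not one of the paper's three proposed methods).
import torch
import torch.nn as nn

class PromptTunedLM(nn.Module):
    def __init__(self, frozen_lm, embed_dim=768, prompt_len=20):
        super().__init__()
        self.lm = frozen_lm
        for p in self.lm.parameters():
            p.requires_grad = False  # the LM itself is never updated
        # The only trainable parameters: a short prefix of soft-prompt embeddings.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds):  # input_embeds: (batch, seq, embed_dim)
        prefix = self.soft_prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.lm(inputs_embeds=torch.cat([prefix, input_embeds], dim=1))
```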
STANDING ON THE SHOULDERS OF GIANT FROZEN LANGUAGE MODELS
arXiv:2308.06259v2 [cs.CL] 14 Aug 2023
Self-Alignment with Instruction Backtranslation
Xian Li  Ping Yu  Chunting Zhou  Timo Schick  Luke Zettlemoyer  Omer Levy  Jason Weston  Mike Lewis
Meta AI
Abstract
Self-Alignment with Instruction Backtranslation