In this section we study the generalization of our features on downstream classification benchmarks. We consider two sets of evaluations in that context. On one hand, we use large and fine-grained datasets such as iNaturalist and Places205. On the other, we use the 12 image classification tasks originally proposed in SimCLR (Chen et al., 2020). For iNaturalist 2018, iNaturalist 2021, and Places205, we train a linear classifier with data augmentations as in Sec. 7.1. We report top-1 accuracy for those three datasets in Table 7. Interestingly, our model significantly outperforms OpenCLIP ViT-G/14 on both variants of iNaturalist (+8.6% and +9.7% for 2018 and 2021 respectively), and lags slightly behind on Places205 (−2.3%). In a second set of evaluations, we measure the performance of our model on video action recognition, even though our features were not trained on videos. We evaluated features on three datasets, namely UCF-
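As an illustration of the linear-probe protocol described above, the sketch below trains a linear classifier on frozen backbone features. This is a minimal sketch, not the paper's exact recipe; `frozen_backbone`, `train_loader`, `feat_dim`, and the optimizer settings are all placeholder assumptions.

```python
# Minimal linear-probe sketch: fit a linear classifier on frozen features.
# `frozen_backbone` and `train_loader` are hypothetical; hyperparameters are
# illustrative, not the values used in the paper.
import torch
import torch.nn as nn

def train_linear_probe(frozen_backbone, train_loader, feat_dim, num_classes, epochs=10):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    frozen_backbone.eval().to(device)               # backbone stays frozen
    probe = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(probe.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:         # augmented batches, as in Sec. 7.1
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():                   # no gradients through the backbone
                feats = frozen_backbone(images)
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```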
DINOv2- Learning Robust Visual Features without Supervision
Continual learning. Recent studies [190; 272] have highlighted the potential of LLMs’ planning capabilities in facilitating continuous learning [196; 197] for agents, which involves continuous acquisition and update of skills. A core challenge in continual learning is catastrophic forgetting [273]: as a model learns new tasks, it tends to lose knowledge from previous tasks. Numerous efforts have been devoted to addressing this challenge, which can be broadly separated into three groups: introducing regularization terms in reference to the previous model [274; 275; 276; 277], approximating prior data distributions [278; 279; 280], and designing architectures with task-adaptive parameters [281; 198]. LLM-based agents have emerged as a novel paradigm, leveraging the planning capabilities of LLMs to combine existing skills and address more intricate challenges. Voyager [190] attempts to solve progressively harder tasks proposed by the automatic curriculum devised by GPT-4.
The Rise and Potential of Large Language Model Based Agents
transcriptions. Individual samples of the AMI dataset contain very large audio files between 10 and 60 minutes in duration. We segment the audio samples according to the Kaldi (Povey et al., 2011) recipe for AMI to yield utterances of suitable length for training ASR systems. This involves splitting samples longer than 30 words at the time-stamps for punctuation to yield shorter utterances. We use the individual headset microphone (AMI IHM) and single distant microphone (AMI SDM) versions of the dataset, with the train, validation and test sets provided therein.
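The splitting rule above reads naturally as a small routine. This is a sketch under stated assumptions, not the Kaldi recipe itself; `words` is a hypothetical list of (token, start_sec, end_sec) triples with punctuation attached to tokens.

```python
# Illustrative segmentation: cut a transcript at punctuation time-stamps once a
# segment has grown past 30 words, yielding utterances short enough for ASR training.
def split_at_punctuation(words, max_words=30, punct=".,?!;"):
    segments, current = [], []
    for token, start, end in words:
        current.append((token, start, end))
        if len(current) > max_words and token and token[-1] in punct:
            segments.append(current)        # close the segment at this punctuation mark
            current = []
    if current:
        segments.append(current)            # keep any trailing words as a final segment
    return segments
```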
DISTIL-WHISPER
Table 10: Qualitative examples from WebNLG. The first 6 examples are from the unseen categories, labeled next to source; the last two examples are from the seen categories. For unseen categories, both prefix-tuning and fine-tuning tend to undergenerate (generated output does not cover full table contents) or generate untruthfully (generated output is inconsistent with table contents). In particular, prefix-tuning tends to undergenerate more often than generate untruthfully, whereas fine-tuning tends to generate untruthfully. For seen categories, both perform fairly well in terms of coverage and truthfulness.
Prefix-Tuning
led model training and evaluation for controlled sentiment generation and summarization; design iterations for GPT-4 evaluation (particularly summarization); substantial writing contributions to abstract, prelims/method and experiments; editing contributions to other sections. EM provided input on early discussions on learning autoregressive reward functions; wrote the first implementation of DPO and ran the first DPO experiments; trained the large-scale (summarization and dialogue) DPO models used in paper experiments; conducted initial GPT-4 win rate evaluations and set up related infrastructure; recruited participants for, conducted, and analyzed results from the human study; wrote the abstract, introduction, related work, discussion, and most of experiments; and assisted with editing the rest of the paper. CF, CM, & SE supervised the research, suggested ideas and experiments, and assisted in writing the paper.
Direct Preference Optimization
the behavior of LLMs. 5. Experts are not yet able to interpret the inner workings of LLMs. 6. Human performance on a task isn’t an upper bound on LLM performance. 7. LLMs need not express the values of their creators nor the values encoded in web text. 8. Brief interactions with LLMs are often misleading.
Introduction
Large language models (LLMs, e.g. GPT-3, PaLM, LLaMA, and GPT-4; Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023; OpenAI, 2023b) and products built on them, such as ChatGPT, have recently prompted an enormous amount of attention from journalists (Klein, 2023; Perrigo, 2023; Oliver, 2023), policymakers (J & C, 2023; Bartz, 2023; Lieu, 2023), and scholars from many
1New York University 2Anthropic, PBC. Correspondence to: Samuel R. Bowman <[email protected]>.
Eight Things to Know about Large Language Models
6 CONCLUSION AND FUTURE CHALLENGES
Recent advances in large language models have been revolutionizing the field of natural language processing. Effectively using LLMs requires understanding their capabilities and limitations for various NLP tasks. This work presents a practical guide to working with LLMs for downstream NLP tasks. We first discuss prominent models like GPT-style and BERT-style architectures and the factors influencing their performance. We then explore using LLMs for downstream tasks, including knowledge-intensive tasks, NLU, and NLG tasks, as well as providing concrete examples of successes and limitations. This practical guide offers insights into LLMs and best practices for harnessing LLMs across NLP tasks. We hope it will enable researchers and practitioners to leverage the potential of LLMs and drive innovation in language technologies.
Harnessing the Power of LLMs in Practice- A Survey on ChatGPT and Beyond
• The volume of data in Delta Lake has grown 304% YoY
• The Lakehouse is increasingly being used for data warehousing, including serverless data warehousing with Databricks SQL, which grew 144% YoY
Methodology: How did Databricks create this report?
The 2023 State of Data + AI is built from fully-aggregated, anonymized data collected from our customers based on how they are using the Databricks Lakehouse and its broad ecosystem of integrated tools. This report focuses on machine learning adoption, data architecture (integrations and migrations) and use cases. The customers in this report represent every major industry and range in size from startups to many of the world’s largest enterprises. Unless otherwise noted, this report presents and analyzes data from February 1, 2022, to January 31, 2023, and usage is measured by number of customers. When possible, we provide YoY comparisons to showcase growth trends over time.
2023 state of ai databrick
elements: 1) an encoder which learns a feature representation of the inputs using two layers of Transformers and 2) a decoder which combines the last predicted note and the encoded representation as input and feeds them to one unidirectional LSTM to produce the final output, which is the predicted next note. They demonstrated from a listening test that generated music pieces from their proposed model are rated as good as or better than the music pieces from human composers. In very recent work, Transformer architectures have also been used in Diffusion networks for monophonic symbolic music generation (Mittal et al., 2021), which further shows their ability to model music. In this work, we will be conditioning our Transformer network on video features. While many Transformer-based music generation models primarily focus on generating MIDI files, our proposed model generates chord sequences
Video2Music
tures. Hernandez, E., Schwettmann, S., Bau, D., Bagashvili, T., Torralba, A. and Andreas, J., 2022. International Conference on Learning Representations.
Language models can explain neurons in language models
of knowledge and needs, ethical concerns, and the impersonal interaction.
Adoption and Appropriation of LLMs
In music composition, the arrangement of a piece typically follows a gradual introduction, a main body with the core content, and a gradual conclusion, also called the sonata form (Webster, 2001). Accordingly, we look into whether our generated music also shows such a long-term structure. Using the same text prompt, we can generate different segments/intervals of it by attaching the expression “1/2/3/4 out of 4” at the end of the text prompt, such as “Italian Hip Hop 2022, 3 of 4.” We randomly generate 1,000 music pieces, where the prompts are from a uniform distribution of the four segment tags. We visualize the results in Figure 6, where we see the first segment shows a gradual increase in both the average amplitude and variance, followed by continuously high average amplitude and variance throughout Segments 2 and 3, and finally concluding with a gradual decline in the last segment.
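A minimal sketch of this prompt construction; the genre string is just the example from the text, and the function name is hypothetical.

```python
# Append a "k of 4" segment tag, drawn uniformly at random, to a text prompt.
import random

def make_segment_prompt(base_prompt: str) -> str:
    k = random.randint(1, 4)                 # uniform over the four segment tags
    return f"{base_prompt}, {k} of 4"

# e.g. 1,000 prompts with uniformly distributed segment tags, as in the experiment
prompts = [make_segment_prompt("Italian Hip Hop 2022") for _ in range(1000)]
```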
MOUSAI
consistent motion, as opposed to the roses and distorting objects produced by the 1B model. Overall, scaling the model improved temporal consistency, prompt fidelity, and motion dynamics while adding capabilities for limited text rendering, spatial understanding, and counting.
A.4. Stylization Evaluation on DAVIS
To evaluate the CLIP similarity score and human preference on video stylization, we use the following set of videos and prompts. We select 20 videos from DAVIS 2016 [43], and for each video we take 16 frames starting from the initial
VideoPoet
We represent each API call as a tuple c = (a_c, i_c) where a_c is the name of the API and i_c is the corresponding input. Given an API call c with a corresponding result r, we denote the linearized sequences of the API call not including and including its result, respectively, as:
e(c) = <API> a_c(i_c) </API>
e(c, r) = <API> a_c(i_c) → r </API>
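This linearization translates directly into code; the sketch below writes the special tokens out literally as in the definitions above. The Calculator example is illustrative.

```python
# e(c) and e(c, r): linearize an API call without and with its result.
def linearize(name: str, inp: str, result: str | None = None) -> str:
    if result is None:
        return f"<API> {name}({inp}) </API>"           # e(c)
    return f"<API> {name}({inp}) → {result} </API>"    # e(c, r), arrow before the result

print(linearize("Calculator", "400 / 1400"))
print(linearize("Calculator", "400 / 1400", "0.29"))
```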
Toolformer
[80] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv, 2018.
[81] Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. Is chatgpt a general-purpose natural language processing task solver? arXiv preprint arXiv:2302.06476, 2023.
[82] Shilin Qiu, Qihe Liu, Shijie Zhou, and Wen Huang. Adversarial attack and defense technologies in natural language processing: A survey. Neurocomputing, 492:278–307, 2022.
[83] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
[84] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the
Harnessing the Power of LLMs in Practice- A Survey on ChatGPT and Beyond
Here, concerns about balancing Type 1 and Type 2 errors disappear. Preregistration mitigates risks associated with research, reducing potential harms, but at the cost of scientific progress. This calls for a cost-benefit analysis: How much risk can be tolerated for what potential gains?
A Two-Sided Discussion of Preregistration of NLP Research
F.4 Ablations
In Table 18, we report key-retrieval accuracy for ablations performed on an earlier version of our 7B model. Without long context fine-tuning, retrieval is possible only on sequence lengths seen during training (4,096); increasing RoPE’s base period θ for inference only has no effect here. Performing LCFT without changing the base period results in failure to retrieve far-away keys at a context length of 8,000 already, despite fine-tuning with a 16,384 sequence length. This failure suggests that adapting the rotation frequencies is indeed necessary. We evaluate frequency scaling with a factor of 1/4 (Chen et al., 2023b), corresponding to the 4x increase of sequence length during fine-tuning. Retrieval performance at 16,000 tokens for keys placed at the beginning is low in this configuration, and extrapolation to longer sequences fails.
G Prompts
G.1 Self-training prompts
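The two knobs ablated above, raising RoPE's base period θ and scaling rotation frequencies by 1/4, can be sketched under standard RoPE conventions. This is an illustration rather than Code Llama's implementation; θ = 10^6 is the base period the paper adopts for LCFT.

```python
# Per-pair RoPE rotation angles for a given base period and frequency scale.
import numpy as np

def rope_angles(seq_len, head_dim, theta=10_000.0, freq_scale=1.0):
    inv_freq = 1.0 / theta ** (np.arange(0, head_dim, 2) / head_dim)
    positions = np.arange(seq_len)
    return np.outer(positions, inv_freq * freq_scale)   # (seq_len, head_dim/2)

base = rope_angles(16_384, 128)                         # theta = 10,000 baseline
lcft = rope_angles(16_384, 128, theta=1_000_000.0)      # larger base period (LCFT)
interp = rope_angles(16_384, 128, freq_scale=0.25)      # 1/4 scaling for 4x length
```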
CodeLlama2
3 STABILIZING TRAINING OF SPARSE MODELS
Sparse models often suffer from training instabilities (Figure 1) worse than those observed in standard densely-activated Transformers.
Figure 1: Training instabilities for sparse models. We refer to training instabilities as divergences in the training loss. Above are two runs from sparse models FLOP-matched to the T5-XL version (Raffel et al., 2019), each trained with a batch size of 1M tokens using the Adafactor optimizer (Shazeer and Stern, 2018). (Left) An unstable training run. (Right) A stable training run.
ST-MOE- DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS
A.3.2 Curriculum Strategy for Meta Human Preference Data
High-quality data is critical for alignment, as discussed for SFT. We worked closely with the annotation platforms during our fine-tuning process, and opted for a curriculum annotation strategy. With the first model, the annotators were asked to make prompts relatively simple, and then to progressively move towards more complex prompts and teaching new skills to Llama 2-Chat. An illustration of this curriculum annotation on our helpfulness preference data is displayed in Figure 26.
Llama2
modality generation quality using widely available modality-specific training data (i.e., data with one or more modalities as input and one modality as output). For conditional cross-modality generation, such as generating images using audio+language prompts, the input modalities are projected into a shared feature space (Section 3.2), and the output LDM attends to the combination of input features. This multimodal conditioning mechanism prepares the diffusion model to condition on any modality or combination of modalities without directly training for such settings. The second stage of training enables the model to handle many-to-many generation strategies that involve simultaneously generating arbitrary combinations of output modalities. To the best of our knowledge, CoDi is the first AI model with this capability. This is achieved by adding a cross-attention module to each diffuser, and an environment encoder V to project the latent variable of
Any-to-Any Generation via Composable Diffusion
7 System design
System design is critical in optimizing Large Language Models (LLMs) like the GPT series for efficient inference, particularly in resource-constrained environments. This section explores key strategies such as hardware offloading, which manages computational resources by leveraging different storage hierarchies, and collaborative inference, which pools resources for enhanced processing capabilities. It also examines the adaptation of LLMs for edge devices, highlighting the importance of system design in maximizing the efficiency and scalability of LLMs across various deployment scenarios.
7.1 Deployment optimization
Beyond Efficiency
4.1 Methodology
To ensure a fair comparison across datasets of different sizes, we decontaminate any instances of the evaluation sets using the same 13-gram overlap filtering as in Brown et al. (2020) and downsample to 40GB to control for dataset size. As we control for dataset size, we emphasize that our evaluation is generous to CC-100 (en), which is about 1/3 the size of the Pile in reality. We compare the following datasets: the Pile, the En-

Component | GPT-2 small
Pile-CC | 1.0878
PubMed Central | 1.0759
Books3 | 1.1959
OpenWebText2 | 1.1111
ArXiv | 1.3548
Github | 1.7912
FreeLaw | 1.0512
Stack Exchange | 1.2981
USPTO Backgrounds | 0.8288
PubMed Abstracts | 0.9524
Gutenberg (PG-19) | 1.2655
OpenSubtitles | 1.2465
Wikipedia (en) | 1.1285
DM Mathematics | 2.6911
Ubuntu IRC | 1.8466
BookCorpus2 | 1.1295
EuroParl | 2.3177
HackerNews | 1.4433
YoutubeSubtitles | 2.0387
PhilPapers | 1.3203
NIH ExPorter | 0.9099
Enron Emails | 1.5888
The Pile | 1.2253
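A hedged sketch of the 13-gram overlap filtering mentioned above, in the spirit of Brown et al. (2020); real pipelines normalize and tokenize far more carefully than the whitespace splitting used here.

```python
# Drop any training document sharing a 13-gram with the evaluation sets.
def ngrams(text: str, n: int = 13):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def decontaminate(train_docs, eval_docs, n=13):
    banned = set().union(*(ngrams(d, n) for d in eval_docs))
    return [d for d in train_docs if not (ngrams(d, n) & banned)]
```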
The Pile- An 800GB Dataset of Diverse Text for Language Modeling
5 Limitations
Although MiniGPT-4 possesses numerous advanced vision-language capabilities, as displayed in our demonstrations, it currently still faces several limitations. Language hallucination. As MiniGPT-4 is built upon LLMs, it inherits LLMs’ limitations like unreliable reasoning ability and hallucinating nonexistent knowledge. This issue might be alleviated
Figure 2: Detailed image descriptions
MiniGPT-4- Enhancing Vision-Language Understanding with Advanced Large Language Models
Concrete problems in ai safety. [Askell et al., 2021] Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Mann, B., DasSarma, N., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Kernion, J., Ndousse, K., Olsson, C., Amodei, D., Brown, T., Clark, J., McCandlish, S., Olah, C., and Kaplan, J. (2021). A general language assistant as a laboratory for alignment. [Bender et al., 2021] Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 610–623, New York, NY, USA. Association for Computing Machinery.
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
4.3. Recommender systems
Knowledge graphs to provide more transparent results to models’ outputs have recently experienced a take-up also in the area of recommender systems, with the goal of enhancing the users’ experience in terms of satisfaction, trust, and loyalty. Most of the approaches are content-based, i.e. they consist of explaining a recommendation with entities from a given knowledge graph in the form of images or natural language sentences.
Knowledge graphs as tools for explainable machine learning: A survey
cleaning [54, 60]. Training for Aesthetics and CLIP improves those capabilities more specifically, in the case of Aesthetics at the expense of CLIP. The ability to train for text-image alignment via CLIP is a noted improvement over prior work [7]. Moreover, training SD1.5 on the pseudo-labeled PickScore dataset (β = 5000, 2000 steps) outperforms training on the raw labels. On the General Preference Partiprompt question, the win-rate of DPO increases from 59.8% to 63.3%, indicating that learning from AI feedback can be a promising direction for diffusion model alignment.
5.5. Analysis
Implicit Reward Model. As a consequence of the theoretical framework, our DPO scheme implicitly learns a reward model and can estimate the differences in rewards between two images by taking an expectation over the inner term of Eq. (14) (details in Supp. S4.1). We estimate over 10 random t ∼ U{0, 1}. Our learned models (DPO-SD1.5
Diffusion Model Alignment Using Direct Preference Optimization
Katja Grace et al. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts”. en. In: Journal of Artificial Intelligence Research 62 (July 2018), pp. 729–754. ISSN: 1076-9757. DOI: 10.1613/jair.1.11222. URL: http://jair.org/index.php/jair/article/view/11222 (visited on 04/29/2022). Katja Grace. Misalignment and misuse: whose values are manifest? en-US. Section: Blog. Nov. 2020. URL: https://aiimpacts.org/misalignment-and-misuse-whose-values-are-manifest/ (visited on 04/29/2022). Joseph Henrich. The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. English. Princeton: Princeton University Press, Oct. 2015. ISBN: 978-0-691-16685-8. Evan Hubinger. Clarifying inner alignment terminology - AI Alignment Forum. URL: https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology (visited on 04/29/2022).
Is Power-Seeking AI an Existential Risk?
sample N_p = 6144 pixels from all image pairs for rendering. The interval between image pairs is randomly chosen ∆T ∈ {1, 2, 4, 8, 16, 32}. To stabilize optimization, we observe that N_I needs to roughly match the number of input frames. The reconstruction quality improves with more iterations and we find 36k iterations (15 hours on a V100 GPU) already produces high-fidelity details. Please find a list of hyper-parameters in the supplement.
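The sampling scheme reads directly as code; a sketch with placeholder frame counts and image dimensions, not BANMo's implementation.

```python
# Pick a random frame interval ΔT and N_p = 6144 pixel locations for one image pair.
import numpy as np

def sample_pair_and_pixels(num_frames, height, width, n_pixels=6144, rng=np.random):
    dt = rng.choice([1, 2, 4, 8, 16, 32])        # random interval between the pair
    t0 = rng.randint(0, num_frames - dt)         # first frame index
    ys = rng.randint(0, height, size=n_pixels)   # sampled pixel rows
    xs = rng.randint(0, width, size=n_pixels)    # sampled pixel columns
    return (t0, t0 + dt), np.stack([ys, xs], axis=1)
```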
BANMo- Building Animatable 3D Neural Models from Many Casual Videos
prompt for a pre-trained text-to-video model. Our approach has the following appealing advantages:
• Instruction-Followed Video Understanding: The proposed GPT4Video effectively harnesses the robust contextual summarization and textual expression capabilities of LLM to generate detailed prompts for videos, with such detail-rich prompts proven to be crucial for the outcomes of generative models [16].
GPT4Video
Transparency Reports. Many platforms publish periodic transparency reports, which typically disclose aggregate data about requests for content removal. An index of transparency reports maintained by the civil society organization Access Now lists reports from more than seventy companies, including Google, Facebook, Twitter, Amazon, Tumblr, Medium, Reddit, Github, and WordPress. These can provide important quantitative overviews of the big picture – or at least part of it. They typically aggregate data about removal requests, along with the platform’s rate of compliance. They may also disclose the frequency with which users accused of wrongdoing choose to appeal or challenge platforms’ decisions. Transparency reports have historically focused on legal removal requests. In 2018, however, Facebook, Twitter, and YouTube all published their first Community Guidelines enforcement reports.
Social_Media_and_Democracy
Figure 4: (a) Aggregated neuron use of the tenth layer of Falcon 7B. As can be seen, the slope of aggregated neuron use is decreasing; other layers exhibit the same pattern. (b) Instead of deleting neurons that were brought into DRAM, we keep the active neurons of the past 5 tokens: when the new token "Was" is being processed, only a small amount of data needs to be changed.
3.2 Improving Transfer Throughput with Increased Chunk Sizes
To increase data throughput from flash memory, it is crucial to read data in more substantial chunks. In this section, we detail the strategy we have employed to augment the chunk sizes for more efficient flash memory reads.
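A toy sketch of the sliding-window idea from Figure 4(b): keep the union of neurons active over the past 5 tokens resident in DRAM, and load or evict only the difference when a new token arrives. The data structures are placeholders, not the paper's implementation.

```python
from collections import deque

class NeuronWindowCache:
    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)   # active-neuron sets of the last few tokens
        self.in_dram = set()                 # neurons currently resident in DRAM

    def step(self, active_neurons: set):
        self.recent.append(set(active_neurons))
        needed = set().union(*self.recent)
        to_load = needed - self.in_dram      # only these are read from flash
        to_evict = self.in_dram - needed     # only these are freed from DRAM
        self.in_dram = needed
        return to_load, to_evict
```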
LLM in a flash
significant breakthroughs have been achieved in the development of multimodal generative models, e.g. models that can generate images from text. Technological advancement in this direction will probably have significant influence on the production and creation of art. Models that can translate data from different modalities into a joint semantic space represent an interesting tool for artistic exploration because the concept of multimodality is integral to many art forms and has always played an important role in the creative process. Furthermore, it is evident that the increasing use of AI technologies in the creation of art will have significant implications regarding the questions related to authorship, as well as on our human perception of art. With the development of AI models that can generate content which very convincingly imitates human textual, visual or musical creations, many of our traditional, as well as contemporary,
UNDERSTANDING AND CREATING ART WITH AI- REVIEW AND OUTLOOK
[341] Carlini, N., J. Hayes, M. Nasr, et al. Extracting training data from diffusion models. CoRR, abs/2301.13188, 2023. [342] Savelka, J., K. D. Ashley, M. A. Gray, et al. Can GPT-4 support analysis of textual data in tasks requiring highly specialized domain expertise? In F. Lagioia, J. Mumford, D. Odekerken, H. Westermann, eds., Proceedings of the 6th Workshop on Automated Semantic Analysis of Information in Legal Text co-located with the 19th International Conference on Artificial Intelligence and Law (ICAIL 2023), Braga, Portugal, 23rd September, 2023, vol. 3441 of CEUR Workshop Proceedings, pages 1–12. CEUR-WS.org, 2023. [343] Ling, C., X. Zhao, J. Lu, et al. Domain specialization as the key to make large language models disruptive: A comprehensive survey, 2023. [344] Linardatos, P., V. Papastefanopoulos, S. Kotsiantis. Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1):18, 2021.
The Rise and Potential of Large Language Model Based Agents
Other Categories and Types of Hallucinations. Raunak et al. [153] propose an alternative categorization of hallucinations. They divide hallucinations into hallucinations under perturbations and natural hallucinations. Hallucinations under perturbation are those that can be observed if a model tested on the perturbed and unperturbed test set returns drastically different content. Their work on hallucinations under perturbation strictly follows the algorithm proposed by Lee et al. [95]; see Section 11.2.2 on the entropy measure. The second category, natural hallucinations, are created with a connection to the noise in the dataset and can be further divided into detached and oscillatory, where detached hallucinations mean that a target translation is semantically disconnected from a source input, and oscillatory hallucinations mean those that are decoupled from the source by manifesting a repeating n-gram. Tu et al. [187] and Kong et al. [87] analyze this phenomenon under
Survey of Hallucination in Natural Language Generation
4. code-cushman-001 is a 12B parameter model by OpenAI and was the initial model for GitHub Copilot (Chen et al., 2021). The details of its training set are unknown. This model has been deprecated by OpenAI but was available from the Microsoft Azure OpenAI Service at the time of writing.
5. Finally, although they are not specifically trained for code generation, we include some results from the LLaMA (Touvron et al., 2023), PaLM (Chowdhery et al., 2022), and LaMDA (Thoppilan et al., 2022) papers. LLaMA’s license prohibits commercial use, and PaLM and LaMDA are not publicly available.
13 There had been a code-cushman-002, but it is not available at the time of writing.

Model | MBPP
LLaMA-7B | 17.7
LaMDA-137B | 14.8
LLaMA-13B | 22.0
CodeGen-16B-Multi | 20.9
LLaMA-33B | 30.2
CodeGeeX | 24.4
LLaMA-65B | 37.7
PaLM-540B | 36.8
CodeGen-16B-Mono | 35.3
StarCoderBase | 49.0
code-cushman-001 | 45.9
StarCoder | 52.7
StarCoder-Prompted | 49.5
StarCoder_paper (1)
<jupyter_start><jupyter_text>TEXT<jupyter_code>CODE<jupyter_output>OUTPUT<jupyter_text> ...
Git commits
We separated the code before the commit, the commit message, and the code after the commit with sentinel tokens. We included the full code with changes instead of diffs, as early experiments suggested that the diff format was difficult to output for smaller models. See Section 3.4 for more details.
<commit_before>code<commit_msg>text<commit_after>code<eos>
We summarize all sentinel tokens in Table 10.
5.2 TRAINING DATA DECONTAMINATION
The code training data was decontaminated by removing files that contained docstrings or solutions from HumanEval and MBPP, docstrings from APPS, questions from GSM8K, or prompts from DS1000. (These benchmarks are further described in Section 6.) To give an indication of the amount of data removed by decontamination, Python is the language with the highest number of matches, with 558 files removed.
5.3 TOKENIZER
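As a sketch, the Git-commit sample format shown above assembles as follows; the sentinel strings are copied from the text, and <eos> stands in for the tokenizer's end-of-sequence token.

```python
# Build one commit training sample with StarCoder-style sentinel tokens.
def format_commit_sample(code_before: str, message: str, code_after: str) -> str:
    return (
        "<commit_before>" + code_before
        + "<commit_msg>" + message
        + "<commit_after>" + code_after
        + "<eos>"
    )
```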
StarCoder_paper (1)
Reddit, Inc. (2015). Reddit, Inc. Transparency Report, 2015. www.reddit.com/wiki/transparency/2015 Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. Media Studies Publications, Paper No. 12. https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1012&context=commpub (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press. Seng, D. (2015). “Who watches the watchmen?” An empirical analysis of errors in DMCA takedown notices. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2563202 Taub, A., & Fisher, M. (2018). Facebook fueled anti-refugee attacks in Germany, new research suggests. New York Times, August 21. www.nytimes.com/2018/08/21/world/europe/facebook-refugee-attacks-germany.html
Social_Media_and_Democracy
Does your application use case require rigor and precision in a zero-mistakes-allowed environment? Or are you deploying closer to the end consumer, with a more forgiving experience yet the need to offer refreshing thoughts? While exceptions are always the rule, often fintech founders impress us with a deep understanding of the problem space and bring relevant niche experience. Mixed with a strong technical profile, this allows you to bring a quick go-to-market through nailing the tone as well as providing a technically sound solution. Lastly, as MOATs are continuously being redefined, teams that are capable of listening, observing and quickly adapting while also being true to their first-principles thinking, have the best chances to succeed.
Fintech x AI_ The Lightspeed View _ by Lightspeed _ Lightspeed Venture Partners _ Jun, 2023 _ Medium
2011 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 5528–5531. [187] Swaroop Mishra and Bhavdeep Singh Sachdeva. 2020. Do we need to create big datasets to learn a task? In SustaiNLP Workshop. 169–173. [188] Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023. Scaling Data-Constrained Language Models. arXiv preprint arXiv:2305.16264 (2023). [189] Basil Mustafa, Carlos Riquelme, Joan Puigcerver, Rodolphe Jenatton, and Neil Houlsby. 2022. Multimodal contrastive learning with limoe: the language-image mixture of experts. NeurIPS 35 (2022), 9564–9576. [190] Lakshmi Nair, Mikhail Bernadskiy, Arulselvan Madhavan, Craig Chan, Ayon Basumallik, and Darius Bunandar. 2023. INT-FP-QSim: Mixed Precision and Formats For Large Language Models and Vision Transformers. arXiv preprint arXiv:2307.03712 (2023).
The Efficiency Spectrum of Large Language Models- An Algorithmic Survey
Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Proceedings of Conference on Health, Inference, and Learning, 2022. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, 2002. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023.
ChatGPT’s One-year Anniversary- Are Open-Source Large Language Models Catching up
As we see above, both improved language model capabilities and limitations can pose significant challenges to the responsible and safe societal adoption of these models. To ensure that we are all well-prepared for the pace of progress, we need more research emphasis on areas such as AI literacy, economic and social resilience, and anticipatory governance.[11] It is very important that OpenAI, other labs, and academia further develop effective evaluation tools and technical improvements in model safety. Progress has been made in the last few years, and more investment in safety will likely produce more gains. We encourage readers interested in this topic to read our work on language model impacts in areas such as disinformation, misuse, education, and economy and labor market.
gpt-4-system-card
5.2 From Tool User to Tool Maker: AI’s Evolutionary Role
Throughout the annals of human civilization, the evolution of tools has occupied a pivotal position (Mithen, 1996; Ko, 2016). The Stone Age, in particular, witnessed the emergence of stone-based weaponry and hunting tools, which afforded humans a competitive edge over their animal counterparts. Subsequent epochs of human history were equally marked by significant societal transformations made possible by the introduction of novel tools. Notably, the invention of the steam engine heralded the onset of the first industrial revolution, while
Figure 8: Example of AI tool creation, where we ask ChatGPT to encapsulate a weather forecast API into a new function suited for a specific target.
Tool Learning with Foundation Models
resulting in notable advancements across many tasks such as speech recognition and audio QA.
• Output Instruction: Lastly, we provide an output instruction to further specify the task and the desired format.
Qwen-Audio
[53] Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2006. The AMI meeting corpus: A pre-announcement. In Machine Learning for Multimodal Interaction: Second International Workshop, MLMI 2005, Edinburgh, UK, July 11-13, 2005, Revised Selected Papers 2. Springer, 28–39. [54] Paolo Castiglioni. 2005. Levinson-durbin algorithm. Encyclopedia of Biostatistics 4 (2005). [55] Andrew A Catellier and Stephen D Voran. 2020. Wawenets: A no-reference convolutional waveform-based approach to estimating narrowband and wideband speech quality. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 331–335. [56] Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. MuST-C: A multilingual corpus for end-to-end speech translation. Computer Speech & Language 66 (2021), 101155.
A Review of Deep Learning Techniques for Speech Processing
4. “Intelligence explosion”: that is, AI-driven feedback loops lead to explosive growth in frontier AI capabilities, at least for some period (on my definition, this need not be driven by a single AI system “improving itself”—see below; and note that the assumption that feedback loops explode, rather than peter out, requires justification). 5. “Recursive self-improvement”: that is, some particular AI system applying its capabilities to improving itself, then repeatedly using its improved abilities to do this more (sometimes assumed or expected to lead to an intelligence explosion; though as above, feedback loops can just peter out instead).
Is Power-Seeking AI an Existential Risk?
[16] Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, and Kate Saenko. Are you looking? grounding to multiple modalities in vision-and-language navigation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6551–6557, Florence, Italy, July 2019. Association for Computational Linguistics. [17] Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, and Eugene Ie. Transferable representation learning in vision-and-language navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7404–7413, 2019. [18] Laurent Itti and Christof Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12):1489–1506, 2000.
A Priority Map for Vision-and-Language Navigation with Trajectory Plans and Feature-Location Cues
6 Implications and Broader Context
We started with two hypotheses: a) that the emergence of nearly all functional linguistic abilities that has previously been observed is a consequence of in-context learning, and b) that the ability of LLMs to follow instructions when instruction-tuned is more likely to be indicative of instruction tuning allowing for the more efficient use of in-context learning rather than leading to the emergence of reasoning skills. Results presented in Section 4 confirmed that there are indeed no emergent abilities in the absence of in-context learning. Similarly, results presented in Section 4.3 confirmed our second hypothesis.
Are Emergent Abilities in Large Language Models just In-Context
10 Energy and Carbon Footprint Estimate of LaMDA
LaMDA- Language Models for Dialog Applications
D.3. Results After submissions we computed our score on each contest (including penalties) using the contests’ scoring system, and found where the model would have placed on the contests’ official scoreboards. Per-problem contest results can be found in Table A5. Overall contest results can be found in Table A6. In the second and third evaluations, we submitted more than 10 submissions per problem. We found that there were some problems we only solved with many samples. We also computed our estimated Codeforces Elo score by tracking what our Elo would have been if we started with the first contest, and competed in each contest in the order they were released, placing according to our calculated placement in Table A6. This was done separately for all three evaluations, and then averaged. Our Elo estimation is based on our reproduction of the Codeforces Elo method, as we didn’t compete live. We checked correctness by reproducing other participants’ Elo scores. Our approach largely
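The Codeforces rating system builds on Elo-style win probabilities; the textbook expected-score formula below is its core ingredient, though the full method (and the authors' reproduction of it) involves considerably more than this sketch.

```python
# Elo win probability and the expected-rank seed used by Codeforces-style ratings.
def win_probability(r_a: float, r_b: float) -> float:
    """Probability that a contestant rated r_a places above one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def expected_rank(my_rating: float, field: list[float]) -> float:
    # Expected rank = 1 + sum over opponents of the probability each beats us.
    return 1.0 + sum(1.0 - win_probability(my_rating, r) for r in field)
```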
alphacode
that enables others to create new songs with her voice. She’s pledged to split royalties with any AI-created song that is able to generate revenue. We expect to see infrastructure emerge to support this on a greater scale. For example, artists need a place to store their custom voice models, track AI covers, and understand streams and monetization across tracks. Some artists or producers may even want to use their voice models to test different lyrics, see how a given voice sounds on a song, or experiment with different collaborators on a track. Royalty-Free Tracks (aka AI Muzak)
The Future of Music_ How Generative AI Is Transforming the Music Industry _ Andreessen Horowitz
Learning conditional controls for large text-to-image diffusion models in an end-to-end way is challenging. The amount of training data for a specific condition may be significantly smaller than the data available for general text-to-image training. For instance, the largest datasets for various specific problems (e.g., object shape/normal, human pose extraction, etc.) are usually about 100K in size, which is 50,000 times smaller than the LAION-5B [79] dataset that was used to train Stable Diffusion [82]. The direct finetuning or continued training of a large pretrained model with limited data may cause overfitting and catastrophic forgetting [31, 75]. Researchers have shown that such forgetting can be alleviated by restricting the number or rank of trainable parameters [14, 25, 31, 92]. For our problem, designing deeper or more customized neural architectures might be necessary for handling in-the-wild conditioning images with
Adding Conditional Control to Text-to-Image Diffusion Models
Figure 2: The final training data was curated to ensure a diverse distribution of prompt topics and model responses.
2.1 Reproducibility
We release all data (including unused P3 generations), training code, and model weights for the community to build upon. Please check the Git repository for the most up-to-date data, training details and checkpoints.
2.2 Costs
We were able to produce these models with about four days work, $800 in GPU costs (rented from Lambda Labs and Paperspace) including several failed trains, and $500 in OpenAI API spend. Our released model, gpt4all-lora, can be trained in about eight hours on a Lambda Labs DGX A100 8x 80GB for a total cost of $100.
3 Evaluation
GPT4All- Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo
AI Performer and Human Validator. While autonomous AI agents reduce humans’ cognitive workload and let them concentrate on other tasks, human (ethical) supervision is often needed. This design pattern is represented in Table 3 and its implementations are found in all four use cases. In the personalized care example (Sect. 4.4) a domain expert supervises the AI interactions with the patient to ensure a stable and safe environment. In the first response use case (Sect. 4.1) a fire fighter validates that the AI correctly interprets the situation, while in the maintenance scenario (Sect. 4.2) the technician validates the result of the suggested repairs. Lastly, in the wildlife monitoring scenario (Sect. 4.3) an expert validates the visual information provided by the AI and extends it by providing further annotations. In all four examples, the AI uses the feedback received from the human actor to improve over time.
Developing Team Design Patterns for Hybrid Intelligence Systems
our use case, i.e., that the weights sum to unity, and there is no requirement of orthogonality, unlike in PCA.
Learning 3D Human Pose Estimation from Dozens of Datasets using a Geometry-Aware Autoencoder to Bridge Between Skeleton Formats
arXiv preprint arXiv:2309.05922, 2023. Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, and Dirk Hovy. Xstest: A test suite for identifying exaggerated safety behaviours in large language models. arXiv preprint arXiv:2308.01263, 2023. Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WINOGRANDE: an adversarial winograd schema challenge at scale. arXiv preprint arXiv:1907.10641, 2019. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021.
ChatGPT’s One-year Anniversary- Are Open-Source Large Language Models Catching up
and its correction, 182–183 on, 133 Nelson, J. L., 19 net neutrality, 210, 267 Network Enforcement Law (NetzDG), 199, 205, 230, 232–234, 299–300 neutrality of internet platforms in relationship to users’ speech, 223–224 The New Governors (Klonick), 238 New York Times Co. v. Sullivan, 262 Newell, Edward, 72 news bots, 96–97 news media attention shift away from news, 144 consequences of changes in, 157 expansion of news sources to individuals and organizations, 146–147 impact on democracy, 139–158 individual-level changes in, 148–155 institutional changes in, 142–148 loss of trust in, 153 online harassment, 154 operational changes, 146 structural changes and impact on democracy, 139–141 newspapers, 143, 204 n-grams method, hate speech detection, 59 Nielsen, Rasmus Kleis, 40 Nimmo, B., 99 Nora, Simon, 207 notice and takedown systems, 222, 226–227, 230, see also content takedown
Social_Media_and_Democracy
4.2 Confirmatory Factor Analysis (CFA)
The findings of the confirmatory factor analysis (Fig. 2) indicated a two-factor model for the SHAPE scale, comprising two inter-correlated subscales.
Society’sAttitudesTowardsHumanAugmentation
Philip Feldman, James R. Foulds, and Shimei Pan. 2023. Trapping llm hallucinations using tagged context prompts. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023. Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508. Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O’Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. 2023. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. Vinija Jain. 2023. Hallucination mitigation. Distilled AI. https://vinija.ai.
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
Gemini: A Family of Highly Capable Multimodal Models
Contributors
Geoffrey Irving Edward Loper Manaal Faruqui Isha Arkatkar Nanxin Chen Izhak Shafran Rama Pasumarthi Nathan Lintz Anitha Vijayakumar Lam Nguyen Thiet Pedro Valenzuela Cosmin Paduraru Daiyi Peng Katherine Lee Shuyuan Zhang Somer Greene Duc Dung Nguyen Paula Kurylowicz Sarmishta Velury Sebastian Krause Cassidy Hardin Lucas Dixon Lili Janzer Kiam Choo Ziqiang Feng Biao Zhang Achintya Singhal Tejasi Latkar Mingyang Zhang Quoc Le Elena Allica Abellan Dayou Du Dan McKinnon Natasha Antropova Tolga Bolukbasi Orgad Keller David Reid Daniel Finchelstein Maria Abi Raad Remi Crocker Peter Hawkins Robert Dadashi Colin Gaffney Sid Lall Ken Franko Egor Filonov Anna Bulanova Rémi Leblond
gemini_1_report
[6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. [7] Michael Chinen, Felicia SC Lim, Jan Skoglund, Nikita Gureev, Feargus O’Gorman, and Andrew Hines. Visqol v3: An open source production ready objective speech and audio metric. In 2020 twelfth international conference on quality of multimedia experience (QoMEX), pages 1–6. IEEE, 2020. [8] Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression. arXiv preprint arXiv:2210.13438, 2022. [9] Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A generative model for music. arXiv preprint arXiv:2005.00341, 2020.
RVQGAN
of Psychology, University of Manchester, Oxford . . . , 1990. [60] Sacerdoti, E. D. The nonlinear nature of plans. In Advance Papers of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR, September 3-8, 1975, pages 206–214. 1975. [61] Russell, S. J., E. Wefald. Do the right thing: studies in limited rationality. MIT press, 1991. [62] Schoppers, M. Universal plans for reactive robots in unpredictable environments. In J. P. McDermott, ed., Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milan, Italy, August 23-28, 1987, pages 1039–1046. Morgan Kaufmann, 1987. [63] Brooks, R. A. A robust layered control system for a mobile robot. IEEE J. Robotics Autom., 2(1):14–23, 1986. [64] Minsky, M. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30, 1961. [65] Isbell, C., C. R. Shelton, M. Kearns, et al. A social reinforcement learning agent.
The Rise and Potential of Large Language Model Based Agents
Judgment Response B [DPO] provides more detailed information about the Civil Rights Movement and offers specific suggestions for essay topics, making it more helpful for someone writing an essay. Table 7: GPT-4 chooses DPO over GT. Sample responses to a prompt from the Anthropic-HH test set. DPO sample generated with temperature 0.7; GT is the chosen completion in the dataset of preferences. For clarity, post-hoc annotations are included in bold, formatted as [annotation]. These annotations are not part of the model generations.
Direct Preference Optimization
[60] Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, and Daniel Cohen-Or. Mystyle: A personalized generative prior. arXiv preprint arXiv:2203.17272, 2022. 3 [61] ogkalu. Comic-diffusion v2, trained on 6 styles at once, https://huggingface.co/ogkalu/comic-diffusion, 2022. 8 [62] OpenAI. Dall-e-2, https://openai.com/product/dall-e-2, 2023. 1, 3 [63] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2337–2346, 2019. 3 [64] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. arXiv preprint arXiv:2302.03027, 2023. 3
Adding Conditional Control to Text-to-Image Diffusion Models
surprising comedic effects, as the examples are shown in Fig. 3. It is worth noting that the character “頓” in both Japanese and Chinese denotes “sudden”, while “智” means “intelligence, insight or intuition”. This highlights the connection between the Oogiri game and the requirement for strong associative abilities in LoT, making Oogiri an ideal platform for exploring LoT capabilities within LLMs. (2) Multimodal LLMs and their creativity. Recently, multimodal Language Models [1, 29, 34, 35] have garnered significant attention, particularly due to their impressive reasoning abilities [7–12, 36]. Moreover, there is a growing focus on exploring the creativity [37–40] of LLMs for applications such as scientific discovery [18, 41–44], creative writing [45–49], etc. (3) Computational humor is a branch of computational linguistics and artificial intelligence that uses computers in humor research [50], which encompasses various tasks, in-
Let’s Think Outside the Box
is a scary technology that could be a problem for our democracy. We will not be able to distinguish real/fake or true/untrue. (N584)
Adoption and Appropriation of LLMs
mance downstream to a large degree. Whether the noisiness of the progression reflects actual changes in the language model’s bias or poor reliability of CrowS-Pairs is an open question we leave for future work. We propose that performing such modifications to portions of language model training data, retraining, and comparing to the baseline model (“interventions”) should be studied further for applications including but not limited to investigating bias amplification and devising new mitigation strategies. For example, while not explored in this case study, we think that the fine-grained information that Pythia provides on the data seen during training could benefit the promising literature on influence functions to estimate the role of specific training samples on the encoded bias (Brunet et al., 2019; Silva et al., 2022). While this was beyond the scope of this case study, we believe that the extensive availability of checkpoints, consistent training order, and retrainabil-
Pythia- A Suite for Analyzing Large Language Models Across Training and Scaling
The latency improvement obtained using FA is significant for both Whisper and Distil-Whisper. At batch size 1, distil-large-v2 is comparable to base.en, while distil-medium.en is faster than tiny.en. However, the memory savings are not enough to offset the effects of the T4 GPU at higher batch sizes; distil-large-v2 is slower than small.en at batch size 4 and 16, and distil-medium.en slower than base.en. Overall, a T4 GPU may be adequate for operating Whisper and Distil-Whisper models at a batch size of 1. For batch sizes beyond this, there is a notable performance stagnation on a T4, and higher memory A100 GPUs are preferential.
DISTIL-WHISPER
About the Project
Applications are invited for a fully funded PhD studentship in Computer Vision and Machine Learning on the topic of Long-Term Video Understanding. The successful applicant will work in a vibrant Machine Learning and Computer Vision lab, with more than 9 PhD students and 3 postdoctoral researchers working on closely related topics. For an insight into the supervisors’ current and previous works, refer to: Prof Dima Damen http://dimadamen.github.io/
Further Particulars
Candidate Requirements
Applicants must hold/achieve a minimum of a Master’s degree (or international equivalent) in computer science, mathematics or other relevant discipline. Applicants without a Master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree.
Basic skills and knowledge required:
·      Essential:
Machine Learning for Long-Term Video Understanding at University of Bristol on FindAPhD.com
//unesdoc.unesco.org/ark:/48223/pf0000385146.locale=en
[38] Antti Salovaara, Sacha Helfenstein, and Antti Oulasvirta. 2011. Everyday appropriations of information technology: A study of creative uses of digital cameras. Journal of the American Society for Information Science and Technology 62, 12 (Dec. 2011), 2347–2363. https://doi.org/10.1002/asi.21643
[39] Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi. 2023. Can AI-Generated Text be Reliably Detected? arXiv e-prints (2023), arXiv–2303.
[40] Annie Tubadji, Toby Denney, and Don J. Webber. 2021. Cultural relativity in consumers’ rates of adoption of artificial intelligence. Economic Inquiry 59, 3 (July 2021), 1234–1251. https://doi.org/10.1111/ecin.12978
Adoption and Appropriation of LLMs
Michael, J., Holtzman, A., Parrish, A., Mueller, A., Wang, A., Chen, A., Madaan, D., Nangia, N., Pang, R. Y., Phang, J., et al. What do NLP researchers believe? Results of the NLP community metasurvey. arXiv preprint 2208.12852, 2022. Nakano, R., Hilton, J., Balaji, S., Wu, J., Ouyang, L., Kim, C., Hesse, C., Jain, S., Kosaraju, V., Saunders, W., et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint 2112.09332, 2021. Ngo, R. The alignment problem from a deep learning perspective. arXiv preprint 2209.00626, 2022. Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma, M., Luan, D., et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint 2112.00114, 2021. Oliver, J. Last week tonight with John Oliver: Feb 26, 2023. URL https://www.hbo.com/last-week-tonight-with-john-oliver/season-10/2-february-26-2022.
Eight Things to Know about Large Language Models
give logit output values and emphasizes that this information is a supplementary source rather than a necessary prerequisite for the hallucination detection approach. The method uses retrieved knowledge as support for the correction phase, instructing the model to repair the phrase by either eliminating or substituting hallucinated information to reduce hallucinations in the created sentence. Decompose and Query framework (D&Q): In their research, Cao et al. (2023) address challenges faced by LLMs in Question Answering, focusing on hallucinations and difficulties with multi-hop relations. They propose the D&Q framework to guide models in utilizing external knowledge while constraining reasoning to reliable information, thus mitigating the risk of hallucinations. Experimental results demonstrate D&Q’s effectiveness, showcasing competitive performance against GPT-3.5 on ChitChatQA and achieving a noteworthy 59.6% F1 score on HotPotQA (question-only). The
AComprehensiveSurveyofHallucinationMitigationTechniquesinLarge LanguageModels
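The D&Q loop described in the excerpt above can be sketched as follows. This is a minimal illustration under assumed interfaces: `llm` and `query_kb` are hypothetical stand-ins for a chat-completion call and a trusted knowledge-base lookup, not the authors' released implementation.

```python
# Minimal sketch of a decompose-and-query loop in the spirit of D&Q:
# break a multi-hop question into sub-questions and answer each one
# only from retrieved, reliable evidence. All helpers are placeholders.

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to any LLM."""
    raise NotImplementedError

def query_kb(question: str) -> str:
    """Placeholder for a lookup against a trusted knowledge source."""
    raise NotImplementedError

def decompose_and_query(question: str, max_hops: int = 4) -> str:
    facts = []
    for _ in range(max_hops):
        sub_q = llm(
            "Given the question and known facts, state the next "
            f"sub-question, or DONE.\nQuestion: {question}\nFacts: {facts}"
        )
        if sub_q.strip() == "DONE":
            break
        # Constrain reasoning to retrieved evidence to curb hallucination.
        facts.append((sub_q, query_kb(sub_q)))
    return llm(f"Answer using only these facts: {facts}\nQuestion: {question}")
```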
5. Mixed Retrieval: The advantage of this strategy lies in leveraging the strengths of different retrieval technologies. Intelligently combining various techniques, including keyword-based search, semantic search, and vector search, adapts to different query types and information needs, ensuring consistent retrieval of the most relevant and context-rich information. Mixed retrieval can serve as a robust complement to other retrieval strategies, enhancing the overall performance of the RAG pipeline. Embedding • Fine-tuning Embedding:
Retrieval-AugmentedGenerationforLargeLanguageModels-ASurvey
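As an illustration of the mixed-retrieval idea in the excerpt above, the sketch below fuses a keyword-based ranking with a vector-based one using reciprocal rank fusion (RRF) — one common fusion rule, chosen here for concreteness; the survey itself does not prescribe a specific combination method.

```python
# Sketch of mixed (hybrid) retrieval: merge keyword-based (e.g., BM25)
# and vector-based rankings with reciprocal rank fusion (RRF).

def rrf_merge(rankings, k: int = 60, top_n: int = 10):
    """rankings: list of ranked doc-id lists, one per retriever."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked):
            # Each retriever contributes 1/(k + rank); k damps outliers.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Usage: fuse a BM25 ranking with a dense-embedding ranking.
bm25_hits = ["d3", "d1", "d7"]   # from a keyword index
dense_hits = ["d1", "d9", "d3"]  # from a vector store
print(rrf_merge([bm25_hits, dense_hits]))
```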
4.2 Design and Analysis Baselines. To comprehensively evaluate our multimodal agent framework, we considered various design choices and their impact on performance. We conducted experiments using different configurations to provide valuable insights into the agent's behavior. We started with GPT-4 without any reference documents during testing and examined its performance both with the raw action API and our simplified action space. Next, we explored different ways to generate guiding documents for the agent. These included documents generated through autonomous exploration, watching human demonstrations, and the manually crafted document as an oracle benchmark. To effectively compare the performance of dif-
AppAgents
hyponym-hypernym prediction, word-supersense prediction, replaced entity detection, predication prediction, dependency relation prediction, entity linking).3 Our focus is on adding knowledge about entities, so our work is closer to Zhang et al. (2019); Peters et al. (2019); Xiong et al. (2019b); Wang et al. (2020); Poerner et al. (2019) than to the linguistically-augmented approaches of Levine et al. (2019); Lauscher et al. (2019). Closest to our work, KNOWBERT (Peters et al., 2019) introduces an entity memory layer that is similar to the one in EAE. In contrast with our work, KNOWBERT starts from the BERT checkpoint, does not train with a knowledge-focused objective such as our mention-masking input function, and uses precomputed entity representations when integrating the information from knowledge bases. In addition, KNOWBERT relies on a fixed, pre-existing candidate detector (alias table) to identify potential candidates and entities for a span, while our model
Entities as Experts- Sparse Memory Access with Entity Supervision
Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, and Dan Klein. Detoxifying language models risks marginalizing minority voices, 2021. URL https://arxiv.org/abs/2104.06390. Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. TACL, 2022. URL https://aclanthology.org/2022.tacl-1.17. Michihiro Yasunaga and Percy Liang. Graph-based, self-supervised program repair from diagnostic feedback. In ICML, 2020. URL https://arxiv.org/abs/2005.10636. Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. CrossFit: A few-shot learning challenge for cross-task generalization in NLP. In EMNLP, 2021. URL https://arxiv.org/abs/2104.08835. Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, and Sebastian Gehrmann. SynthBio: A case study in human-AI collaborative curation of text datasets, 2021. URL https://arxiv.org/abs/2111.06467.
Scaling Instruction-Finetuned Language Models
non-matching references. Advances in Neural Information Processing Systems 34 (2021), 22363–22378. [370] Narla John Metilda Sagaya Mary, Srinivasan Umesh, and Sandesh Varadaraju Katta. 2021. S-vectors and TESA: Speaker embeddings and a speaker authenticator based on transformer encoder. IEEE/ACM Transactions on Audio, Speech, and Language Processing 30 (2021), 404–413. [371] Mitchell McLaren, Luciana Ferrer, Diego Castan, and Aaron Lawson. 2016. The speakers in the wild (SITW) speaker recognition database. In Interspeech. 818–822. [372] Ivan Medennikov, Maxim Korenevsky, Tatiana Prisyach, Yuri Khokhlov, Mariya Korenevskaya, Ivan Sorokin, Tatiana Timofeeva, Anton Mitrofanov, Andrei Andrusenko, Ivan Podluzhny, et al. 2020. Target-speaker voice activity detection: a novel approach for multi-speaker diarization in a dinner party scenario. arXiv preprint arXiv:2005.07272 (2020).
AReviewofDeepLearningTechniquesforSpeechProcessing
5 Pushing the Chatbot State-of-the-art with QLoRA Having established that 4-bit QLORA matches 16-bit performance across scales, tasks, and datasets, we conduct an in-depth study of instruction finetuning up to the largest open-source language models available for research. To assess the performance of instruction finetuning these models, we evaluate
Table 4: Mean 5-shot MMLU test accuracy for LLaMA 7-65B models finetuned with adapters on Alpaca and FLAN v2 for different data types. Overall, NF4 with double quantization (DQ) matches BFloat16 performance, while FP4 is consistently one percentage point behind both.
LLaMA Size  Dataset  BFloat16  Float4  NFloat4 + DQ
7B          Alpaca    38.4      37.2    39.0
7B          FLAN v2   45.6      44.0    44.5
13B         Alpaca    47.2      47.3    47.5
13B         FLAN v2   50.6      50.0    50.7
33B         Alpaca    57.7      55.9    57.3
33B         FLAN v2   60.5      58.5    59.2
65B         Alpaca    61.8      61.3    61.8
65B         FLAN v2   62.5      63.3    63.9
Mean                  53.0      52.2    53.1
QLORA
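For concreteness, the NFloat4 + DQ setting compared in Table 4 corresponds to a readily available configuration in Hugging Face transformers with bitsandbytes; the sketch below shows it. The model name is illustrative, and this is a general-purpose loading recipe rather than the paper's exact training script.

```python
# Sketch: loading a model with the 4-bit NF4 + double-quantization setting
# the table compares, via Hugging Face transformers / bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",        # the NFloat4 data type
    bnb_4bit_use_double_quant=True,   # DQ: also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",            # illustrative checkpoint name
    quantization_config=bnb_config,
    device_map="auto",
)
```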
In addition to this suite of external evaluations, specialist internal teams conduct ongoing red teaming of our models across areas such as the Gemini policies and security. These activities include less structured processes involving sophisticated adversarial attacks to identify new vulnerabilities. Discovery of potential weaknesses can then be used to mitigate risks and improve evaluation approaches internally. We are committed to ongoing model transparency and plan to share additional results from across our evaluation suite over time. 6.4. Mitigations Mitigations are developed in response to the outcomes of the assessment, policy, and evaluation approaches described above. Evaluations and mitigations are used in an iterative way, with evaluations being re-run following mitigation efforts. We discuss our efforts on mitigating model harms across data, instruction-tuning, and factuality below.
gemini_1_report
traditional campaigns. Journalism and Mass Communication Quarterly, 90(1), 23–38. Rosenberg, M. (2019). Ad tool Facebook built to fight disinformation doesn't work as advertised. New York Times, July 25. www.nytimes.com/2019/07/25/technology/facebook-ad-library.html Shaw, D. R., Blunt, C., & Seaborn, B. (2018). Testing overall and synergistic campaign effects in a partisan statewide election. Political Research Quarterly, 71(2), 361–379. Singer, N. (2018a). Taking a spin through data behind ads for candidates. New York Times, September 3. www.nytimes.com/2018/09/02/technology/03adarchive.html Singer, N. (2018b). "Weaponized ad technology": Facebook's moneymaker gets a critical eye. New York Times, August 16. www.nytimes.com/2018/08/16/technology/facebook-microtargeting-advertising.html
Social_Media_and_Democracy
Prompt Tuning. Prompt tuning is a technique used to enhance the performance of LLMs on supervised downstream tasks. It formulates the downstream task as a masked language problem, converting the original token input into a template in which certain tokens are left unfilled for the LLM to complete. By modifying the tunable template embedding, prompt tuning aims to improve performance on downstream tasks by reducing the distribution shift between the pretraining tasks and the specified downstream tasks. This method also enables the LLM to engage in few-shot or even zero-shot learning, which is especially useful in scenarios with limited supervised data, by generating new prompt templates.
TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey
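A minimal sketch of the mechanism described above — a small trainable template (soft prompt) prepended to a frozen LLM's input embeddings, with only the template updated — might look as follows; the class name and shape choices are illustrative, not a specific library's API.

```python
# Sketch of prompt tuning: a trainable template embedding is prepended to
# the frozen LLM's input embeddings; only the template receives gradients.
import torch
import torch.nn as nn

class PromptTunedLM(nn.Module):
    def __init__(self, frozen_lm, n_prompt_tokens: int = 20, d_model: int = 768):
        super().__init__()
        self.lm = frozen_lm
        for p in self.lm.parameters():
            p.requires_grad = False          # the LLM itself stays fixed
        # The tunable template embedding (the only trainable parameters).
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_embeds):         # (batch, seq, d_model)
        batch = input_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.lm(inputs_embeds=torch.cat([prefix, input_embeds], dim=1))
```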
for a given predicate. To cope with the computational costs of reasoning, the authors use an ad-hoc taxonomy of is-a and has-a relationships.
Knowledge graphs as tools for explainable machine learning: A survey
D.2 Instructions and Interface We display basic task instructions in a pop-up dialog when first loading the interface, and these instructions remain available throughout the interaction. These instructions for the 'playground' and 'red team' tasks can be found in figure 41. For the playground task, we also link to a separate page with expanded instructions that include more detailed examples, excerpts of which can be seen in figure 42. The human feedback interface is shown in figure 6. During the online data collection process, we added an additional option to the interface for Upworkers. This feature allowed them to edit one of the model responses. When they used this feature, we stored a comparison of the edit to the original (assuming the edit was better), rather than the initial comparison of two model outputs. This would have affected less than 10% of the online data. D.3 Data Quality Measurement Challenges
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Motivation and Background. Although LLM-based agents possess commendable text understanding and generation capabilities, they by nature operate as isolated entities [409]. They lack the ability to collaborate with other agents and acquire knowledge from social interactions. This inherent limitation restricts their potential to learn from multi-turn feedback from others to enhance their performance [27]. Moreover, they cannot be effectively deployed in complex scenarios requiring collaboration and information sharing among multiple agents. As early as 1986, Marvin Minsky made a forward-looking prediction. In his book The Society of Mind [442], he introduced a novel theory of intelligence, suggesting that intelligence emerges from the interactions of many smaller agents with specific functions. For instance, certain agents might be responsible for pattern recognition, while others might handle decision-making or generate solutions.
TheRiseandPotentialofLargeLanguageModel BasedAgents
being addressed after training by using various techniques to better “align” the LLM with human values (Stiennon et al., 2020; Bai et al., 2022; Perez et al., 2022). Other legal and ethical concerns already arise during the pre-training phase, specifically regarding the rights of content creators whose public data is used to train the language model. This data is subject to copyright laws in many jurisdictions, including the U.S. and E.U. It has been questioned whether machine learning models trained on such data fall under exemptions such as the fair-use doctrine in the U.S. (Kuhn, 2022; Butterick, 2022; Rothchild & Rothchild, 2022). It is likely considered fair use when a model generates novel content that is not in the training set, as it is a transformative use of the copyrighted material (Lemley & Casey, 2020). However, if the model produces output similar to copyrighted data, particularly in scenarios that affect the economic market of the content creators, fair use may
StarCoder_paper (1)
Regarding associable discrimination, we aim to develop fundamental LoT discrimination skills for LLMs. Based on the Oogiri-GO data, we design choice questions to enhance the LLM's LoT discrimination ability, i.e., selection skill. Besides, as 77.95% of the Oogiri-GO data have human preference annotations, i.e., the number of likes of several responses (see Sec. 3), we design ranking questions to improve another discrimination skill, i.e., ranking ability.
Let’sThinkOutsidetheBox
Figure 3: (a) Preactivations of tokens in one sequence in OPT 6.7B. The blue graph shows preactivations of elements that the predictor detected as positive, while the green graph is for the up projection. As can be seen, most of the false positives are close to 0, and false negatives constitute a small portion of the elements. (b) A small low-rank predictor finds out which intermediate neurons are going to be activated, instead of running the heavy up projection.
in RAM. For the Feed-Forward Network (FFN) portions, only the non-sparse segments are dynamically loaded into DRAM as needed. Storing attention weights, which constitute approximately one-third of the model's size, in memory allows for more efficient computation and quicker access, thereby enhancing inference performance without the need for full model loading.
LLM in a flash
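The low-rank predictor of Figure 3(b) can be sketched as follows, assuming illustrative dimensions and a zero threshold; the factors A and B here are random stand-ins for the paper's trained predictor weights.

```python
# Sketch of the low-rank sparsity-predictor idea: instead of computing the
# full up-projection to see which FFN neurons fire, two thin matmuls through
# a rank-r bottleneck predict the positive preactivations, so only the
# corresponding weight rows need to be loaded and computed.
import torch

d_model, d_ffn, r = 4096, 16384, 128
A = torch.randn(d_model, r) / d_model**0.5   # stand-ins for trained factors
B = torch.randn(r, d_ffn) / r**0.5

def predicted_active(x, threshold=0.0):
    """x: (batch, d_model). Returns a boolean mask over FFN neurons."""
    logits = x @ A @ B            # far cheaper than the full d_model x d_ffn matmul
    return logits > threshold     # neurons predicted to pass the ReLU

x = torch.randn(1, d_model)
mask = predicted_active(x)
# Then load / compute only the up-projection rows where mask is True.
```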
Sure enough, as the models get bigger and bigger, they begin to deliver human-level, and then superhuman, results. Just as mobile unleashed new types of applications through new capabilities like GPS, cameras and on-the-go connectivity, we expect these large models to motivate a new wave of generative AI applications. Models: expect to see higher quality outputs, longer-form content, and better vertical-specific tuning.
Generative AI A Creative New World Sequoia Capital
Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884–1895, Melbourne, Australia. Association for Computational Linguistics. Peter Hase and Mohit Bansal. 2020. Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5540–5552, Online. Association for Computational Linguistics.
Measuring Association Between Labels and Free-Text Rationales
7 UNDERSTANDING THE LOW-RANK UPDATES Given the empirical advantage of LoRA, we hope to further explain the properties of the low-rank adaptation learned from downstream tasks. Note that the low-rank structure not only lowers the hardware barrier to entry, which allows us to run multiple experiments in parallel, but also gives better interpretability of how the update weights are correlated with the pre-trained weights. We focus our study on GPT-3 175B, where we achieved the largest reduction of trainable parameters (up to 10,000×) without adversely affecting task performance. We perform a sequence of empirical studies to answer the following questions: 1) Given a parameter budget constraint, which subset of weight matrices in a pre-trained Transformer should we adapt
LORA
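As background for the analysis above, a minimal LoRA layer looks roughly like this: the pre-trained weight W stays frozen and only the low-rank factors are trained, so the effective weight is W + (alpha/r)·BA. The hyperparameters are illustrative, not the paper's settings.

```python
# Minimal LoRA sketch: frozen base weight plus a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad = False           # pre-trained W stays frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # BA = 0 at init
        self.scale = alpha / r

    def forward(self, x):
        # Full weight is never materialized; the update is applied as two
        # thin matmuls, which is what keeps the trainable budget tiny.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```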
the models are adapted to news one week/month before the time the survey was conducted. (C) Our hypothesis is that the target word probabilities, which are updated after finetuning BERT, reflect media effects. These in turn are predictive of the response distributions found in surveys. The media diet scores are used to predict the response proportions, combining data over multiple media diets and surveys. In additional analyses, we include demographic stats and information about how closely respondents were paying attention to news.
Language models trained on media diets can predict public opinion
Computers as cognitive tools, pp. 269–296. Routledge, 2013. Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theorem proving. Advances in Neural Information Processing Systems, 35:26337–26349, 2022. Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering. ArXiv preprint, abs/2203.05115, 2022. URL https://arxiv.org/abs/2203.05115. Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37(4-5):421–436, 2018.
Tool Learning with Foundation Models
[37] Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. WIT: Wikipedia-based image text dataset for multimodal multilingual machine learning. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 2443–2449. ACM, 2021. 1, 5 [38] Hao Tan and Mohit Bansal. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, Hong Kong, China, 2019. Association for Computational Linguistics. 6
REVEAL-Retrieval-AugmentedVisual-LanguagePre-Trainingwith Multi-SourceMultimodalKnowledgeMemory
– Black Alternative Metal, The Pick of Death (Deluxe), 2006, 3 of 4 – Death Metal, 2012, 3 of 4 – Drops, Kanine Remix, Darkzy, Drops Remixes, bass house, (Deluxe) (Remix), 3 of 4 – EDM (Deluxe) (Remix), 3 of 4 – Electro House (Remix), 2023, 3 of 4 – Electro Swing Remix 2030 (Deluxe Edition), 3 of 4 – Future Bass, EDM (Remix), Remix, 3 of 4 – Hip Hop Tech, Bandlez, Hot Pursuit, brostep, 3 of 4 – Italian Hip Hop 2022 (Deluxe Edition), 3 of 4 – Heavy metal (Deluxe Edition), 3 of 4 – The Heavy Death Metal War (Deluxe), 2006, 3 of 4 – Pop, Taylor Swift, Speak Now, 2014, (Deluxe), 3 of 4 – Melodic Metal, Iron Dust (Deluxe), 2006, 3 of 4 – Electronic, Dance, EDM (Deluxe) (Remix), 3 of 4 – Alternative Hip Hop Oh-My, 2016, (Deluxe), 3 of 4 – Viking Heavy Death Metal (Deluxe), 2006, 3 of 4 – Possessed Death Metal Stones (Deluxe), 2006, 3 of 4 – Hardstyle, Drop, 8D, Remix, High Quality, 2 of 4 – Drop, French 79, BPM Artist, Vol. 4, Electronica, 2016
Moûsai
When using large guidance weights, the resulting x̃θ(zt, c) must be projected back to the possible range of pixel values at every sampling step to prevent train-test mismatch. In this regime, the standard approach, i.e., clipping the values to the right range (e.g., np.clip(x, -1, 1)), leads to significant saturation artifacts in the generated videos. A similar effect was observed in Saharia et al. (2022b) for text-to-image generation. Saharia et al. (2022b) use dynamic thresholding to alleviate this saturation issue. Specifically, dynamic clipping involves clipping the image to a dynamically chosen threshold s, followed by scaling by s (i.e., np.clip(x, -s, s) / s) (Saharia et al., 2022b). Although dynamic clipping can help with over-saturation, we did not find it sufficient in initial experiments. We therefore also experiment with letting w oscillate between a high and a low guidance
IMAGEN VIDEO- HIGH DEFINITION VIDEO GENERATION WITH DIFFUSION MODELS
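A compact sketch of the two remedies discussed in the excerpt above — dynamic thresholding and an oscillating guidance weight — is given below. The percentile p and the weight values are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of dynamic thresholding (clip to a per-sample percentile-based s,
# then rescale) and of alternating between a high and low guidance weight.
import numpy as np

def dynamic_threshold(x, p=99.5):
    # s is the p-th percentile of |x| per sample, never below 1, so images
    # with no extreme values are left unchanged.
    s = np.percentile(np.abs(x), p, axis=tuple(range(1, x.ndim)), keepdims=True)
    s = np.maximum(s, 1.0)
    return np.clip(x, -s, s) / s      # instead of the static np.clip(x, -1, 1)

def oscillating_weight(step, w_high=15.0, w_low=1.0):
    # Alternate the guidance weight w between a high and a low value.
    return w_high if step % 2 == 0 else w_low
```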
3.3. Seeing the whole elephant, a little bit at a time The good news is that if we can start to work together, progress may not be so far away. If the problem of robust intelligence had already been solved, there would be no need to
19 A second cultural issue, as one reader of this manuscript pointed out, is that advocates of deep learning have often put far too much stock in big data, often assuming, sometimes incorrectly, that the answers to complex problems can largely be found in ever-bigger data sets and larger and larger clusters of compute. Whole fields, such as linguistics, have largely been dismissed along the way. This cannot be good.
20 Strictly speaking, Planck never actually said quite that: see https://quoteinvestigator.com/2017/09/25/progress/
The Next Decade in AI-
University Preparatory Certificate 2.7.1 University Preparatory Certificate for Science & Engineering and University Preparatory Certificate for Humanities 1. International applicants whose secondary education qualifications are not suitable for direct admission to leading UK universities may apply for a one-year programme for Science and Engineering or Humanities offered by UCL. 2. Successful completion of the one-year programme may be used to apply for an undergraduate programme of study at UCL or another university. 3. Entrance requirements by country can be obtained from the Centre for Languages and International Education (CLIE). 4. All applicants are required to take an entrance test; further information can be obtained from the CLIE.
UCL Academic Manual
A study by Long [150] proposed an attention-based LSTM with speaker profile features, and their experimental findings suggest that employing speaker profiles can help enhance fake news identification. Recently, attention techniques have been used to efficiently extract information related to a mini query (article headline) from a long text (news content) [47], [87]. A study by Singhania et al. [87] used an automated detector built on a three-level hierarchical attention network (3HAN). Three levels exist in 3HAN: one for words, one for sentences, and one for the headline. Because of its three levels of attention, 3HAN assigns different weights to different sections of an article. In contrast to other deep learning models, 3HAN yields understandable results. While 3HAN only uses textual information, a study by Jin et al. [47] used image features, including social context and text features, as well as attention on RNN (att-RNN). Another study used
A_Comprehensive_Review_on_Fake_News_Detection_With_Deep_Learning
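The hierarchical attention pattern underlying 3HAN can be illustrated with one attention-pooling level, as sketched below; the same block is stacked from words to sentences to the document (plus the headline level). This follows the generic hierarchical-attention recipe rather than 3HAN's exact configuration, and all dimensions are illustrative.

```python
# Sketch of one level of a hierarchical attention network: an attention
# layer pools a sequence of vectors (e.g., word vectors) into one vector
# (e.g., a sentence vector). Stacking the block gives words -> sentences
# -> document; the attention weights are what make decisions inspectable.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.proj = nn.Linear(d, d)
        self.ctx = nn.Parameter(torch.randn(d))    # learned context vector

    def forward(self, h):                          # h: (batch, seq, d)
        u = torch.tanh(self.proj(h))
        w = torch.softmax(u @ self.ctx, dim=1)     # attention weight per token
        return (w.unsqueeze(-1) * h).sum(dim=1)    # weighted sum -> (batch, d)
```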
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. 2022. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. ArXiv, abs/2208.12242. Dongchao Yang, Jianwei Yu, Helin Wang, Wen Wang, Chao Weng, Yuexian Zou, and Dong Yu. 2022. Diffsound: Discrete diffusion model for text-to-sound generation. CoRR, abs/2207.09983. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. 2022a. Photorealistic text-to-image diffusion models with deep language understanding. CoRR, abs/2205.11487. Botao Yu, Peiling Lu, Rui Wang, Wei Hu, Xu Tan, Wei Ye, Shikun Zhang, Tao Qin, and Tie-Yan Liu. 2022. Museformer: Transformer with fine- and coarse-grained attention for music generation. CoRR, abs/2210.10349.
MOUSAI