Fig. 7 shows the importance of delayed optimization for decoupling skeletal deformation and non-rigid deformation. When the two are not decoupled well, generalization to new views is much poorer, as shown in Fig. 8. Figure 4. Qualitative comparison to HyperNeRF [48]. Our human motions are also more extreme than the examples shown to work with HyperNeRF.
HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video
where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.2
QLORA
6 ACKNOWLEDGEMENTS We thank Kenneth Li, Sonja Johnson-Yu, Daniel Bashir, Zhou Fan, and Safwan Hossain for their feedback and discussions about this paper. We also thank Microsoft Azure and the Harvard Data Science Initiative for access to compute. The first author is supported by an NSF Graduate Research Fellowship and a Kempner Institute Graduate Fellowship. 10 REFERENCES Reza Yazdani Aminabadi, Samyam Rajbhandari, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Olatunji Ruwase, Shaden Smith, Minjia Zhang, Jeff Rasley, and Yuxiong He. DeepSpeed-Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC ’22, pp. 1–15, Dallas, Texas, November 2022. IEEE Press.
CHAIN-OF-THOUGHT REASONING IS A POLICY IMPROVEMENT OPERATOR
ish the task better. (1) Don't search the same entity two times since the results are always the same. (2) When the search action doesn't find the corresponding page, you should try to search for a similar entity. (3) When the search action returns a page which is not related to the question, you should disambiguate the entity to find other entities that share similar names with the current one. (4) You can only search one entity name in each action. Demonstration Examples: Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into? Thought 1: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area. Action 1: Search[Colorado orogeny] Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas. This took place from 1780 to 1650 million years ago (Mya), during the Paleoproterozoic (Statherian Period). It is recorded in the Colorado orogen, a >500-km-wide belt of oceanic arc rock that extends southward into New Mexico. The Colorado orogeny was likely part of the larger Yavapai oroge
Tool Learning with Foundation Models
Self-Instruct, Alpaca, Unnatural Instructions The Self-Instruct, Alpaca, and Unnatural Instructions datasets [59, 55, 26] are instruction tuning datasets collected with various approaches of model distillation from GPT-3 Instruct and ChatGPT. They rely on prompting, in-context learning, and paraphrasing to come up with diverse sets of instructions and outputs. The datasets comprise 82,612, 51,942, and 240,670 examples respectively. One advantage of such distilled datasets is that they contain a more diverse set of instruction styles compared to the FLAN v2 collection and similar instruction tuning collections. LongForm The LongForm dataset [30] is based on an English corpus augmented with instructions and as such is a hybrid human-generated dataset. The underlying documents are human-written and come from C4 and Wikipedia while the instructions are generated via LLMs. The dataset is extended
QLORA
A.2 Training Setup for the Text-Music Pairs For the textual description, we use metadata such as the title, author, album, genre, and year of release. Given that a song can span longer than 44 s, we append a string indicating which chunk is currently being trained on, together with the total number of chunks the song is made of (e.g., 1 of 4). This allows selecting the region of interest during inference. Hence, an example prompt looks like “Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4.” To make the conditioning more robust, we shuffle the list of metadata and drop each element with a probability of 0.1. Furthermore, half of the time we concatenate the list with spaces and the other half with commas, to make the interface more robust during inference. Some example prompts from our dataset are shown in Table 6.
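The augmentation procedure described above (metadata shuffling, element dropout, chunk indicator, alternating separators) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and use of Python's `random` module are our own choices:

```python
import random

def build_prompt(metadata, chunk_idx, total_chunks, rng=random):
    """Assemble a conditioning prompt from song metadata.

    Each metadata element is kept with probability 0.9 (dropped with
    p=0.1), the list is shuffled, a "k of n" chunk indicator is
    appended, and the parts are joined with spaces or commas (50/50)
    so inference is robust to either separator.
    """
    parts = [m for m in metadata if rng.random() >= 0.1]
    rng.shuffle(parts)
    parts.append(f"{chunk_idx} of {total_chunks}")
    sep = " " if rng.random() < 0.5 else ", "
    return sep.join(parts)

rng = random.Random(0)
prompt = build_prompt(
    ["Egyptian Darbuka", "Drums", "Rythm", "(Deluxe Edition)"], 2, 4, rng
)
```

The chunk indicator is appended after shuffling, so the region-of-interest tag always survives the dropout and reordering.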
MOUSAI
Recent advances in natural language processing (NLP) have made significant progress toward the key challenge of natural interaction with humans. In November 2022, OpenAI first introduced ChatGPT [1], a large dialogue language model, which has attracted wide attention for its high-quality generated text. ChatGPT is modeled in the same framework as InstructGPT [2], [3]. The model includes two main components: supervised prompt fine-tuning [4] and RL fine-tuning. Prompt learning, a novel paradigm in NLP, eliminates the need for labeled datasets by leveraging a large generative pre-trained language model (PLM), i.e., GPT [5]. For example, to recognize the emotion of the sentence “I didn’t do well in the test today.”, we can append the extra words “I feel so ” and utilize a PLM to predict the word that fills the empty slot. Therefore, in the context of few-shot or zero-shot learning with prompt learning, PLMs can be effective, although challenges arise
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning (ICML), 2018. Aman Sinha, Hongseok Namkoong, and John Duchi. Certifiable distributional robustness with principled adversarial training. In International Conference on Learning Representations (ICLR), 2018. Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems (NeurIPS), 2012. Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. arXiv, 2022. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1355. URL https://aclanthology.org/P19-1355.
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
Product-Led AI | Greylock https://greylock.com/greymatter/seth-rosenberg-product-led-ai/
Product-Led AI _ Greylock
[Khattab et al., 2022] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive nlp. arXiv preprint arXiv:2212.14024, 2022. [Kwiatkowski et al., 2019] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019. [Lee et al., 2020] Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. Learning dense representations of phrases at scale. arXiv preprint arXiv:2012.12624, 2020. [Leng et al., 2023] Quinn Leng, Kasey Uhlenhuth, and Alkis Polyzotis. Best practices for llm evaluation of rag applications. https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG, 2023.
Retrieval-Augmented Generation for Large Language Models: A Survey
Stanford CRFM https://crfm.stanford.edu/2023/03/13/alpaca.html
Stanford alpha CRFM
types and levels of abstraction, 2022. [58] A. Voynov, K. Aberman, and D. Cohen-Or. Sketch-guided text-to-image diffusion models. 2022. [59] T. Wang, T. Zhang, B. Zhang, H. Ouyang, D. Chen, Q. Chen, and F. Wen. Pretraining is all you need for image-to-image translation, 2022. [60] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8798–8807, 2018. [61] X. Xiang, D. Liu, X. Yang, Y. Zhu, and X. Shen. Anime2sketch: A sketch extractor for anime arts with deep networks. https://github.com/Mukosame/Anime2Sketch, 2021. [62] S. Xie and Z. Tu. Holistically-nested edge detection. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 1395–1403, 2015.
Adding Conditional Control to Text-to-Image Diffusion Models
[27] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799. PMLR, 2019. URL http://proceedings.mlr.press/v97/houlsby19a.html. [28] Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Towards unsupervised dense information retrieval with contrastive learning. ArXiv, abs/2112.09118, 2021.
E5
[53] Nikola Marangunić and Andrina Granić. 2015. Technology acceptance model: a literature review from 1986 to 2013. Universal Access in the Information Society 14, 1 (March 2015), 81–95. https://doi.org/10.1007/s10209-014-0348-1 [54] Simone Marcheschi, Fabio Salsedo, Marco Fontana, and Massimo Bergamasco. 2011. Body Extender: Whole body exoskeleton for human power augmentation. In 2011 IEEE International Conference on Robotics and Automation. IEEE. [55] Kevin McKee, Xuechunzi Bai, and Susan Fiske. 2021. Understanding human impressions of artificial intelligence. (2021). [56] Bertolt Meyer and Frank Asbrock. 2018. Disabled or cyborg? How bionics affect stereotypes toward people with physical disabilities. Frontiers in Psychology 9 (2018), 2251.
Society’s Attitudes Towards Human Augmentation
Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Fine-tuned language models are continual learners. arXiv preprint arXiv:2205.12393.
SELF-INSTRUCT: Aligning Language Model with Self Generated Instructions
We also encountered many efficiency and robustness chal- lenges in scaling up aggregation-based methods to dynamic scenes. To efficiently model scene motion across multiple views, we model this motion using motion trajectory fields that span multiple frames, represented with learned basis functions. Furthermore, to achieve temporal coherence in our dynamic scene reconstruction, we introduce a new tem- poral photometric loss that operates in motion-adjusted ray space. Finally, to improve the quality of novel views, we pro- pose to factor the scene into static and dynamic components through a new IBR-based motion segmentation technique within a Bayesian learning framework.
DynIBaR: Neural Dynamic Image-Based Rendering
[Table residue: top topic-model words per Pile component (BookCorpus2, EuroParl, HackerNews, YoutubeSubtitles, PhilPapers, NIH ExPorter, Enron Emails); the per-topic word columns were flattened in extraction and are not recoverable.]
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
ve (which computes the max KL over states instead of the mean) forms a lower bound (i.e., a pessimistic bound) on the performance of the policy π. TRPO uses a hard constraint rather than a penalty because it is hard to choose a single value of β that performs well across different problems, or even within a single problem, where the characteristics change over the course of learning. Hence, to achieve our goal of a first-order algorithm that emulates the monotonic improvement of TRPO, experiments show that it is not sufficient to simply choose a fixed penalty coefficient β and optimize the penalized objective Equation (5) with SGD; additional modifications are required.

3 Clipped Surrogate Objective

Let r_t(θ) denote the probability ratio r_t(θ) = π_θ(a_t | s_t) / π_θ_old(a_t | s_t), so r(θ_old) = 1. TRPO maximizes a “surrogate” objective

L^CPI(θ) = Ê_t[ (π_θ(a_t | s_t) / π_θ_old(a_t | s_t)) Â_t ] = Ê_t[ r_t(θ) Â_t ].   (6)

The superscript CPI refers to conservative policy iteration [KL02], where this objective was proposed. Without a constraint, maximization of L^CPI would lead to an excessively large policy update; hence, we now consider how to modify the objective, to penalize changes to the
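The clipped surrogate objective that this section builds toward can be sketched directly from the probability ratio. This is a minimal NumPy illustration of the clipped objective, not the authors' implementation:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, adv, eps=0.2):
    """Clipped surrogate objective from PPO.

    r_t(theta) = pi_theta(a|s) / pi_theta_old(a|s) is computed from
    log-probabilities; the per-sample objective takes the minimum of the
    unclipped and clipped terms, which removes any incentive to move
    r_t outside [1 - eps, 1 + eps] and makes the bound pessimistic.
    """
    r = np.exp(logp_new - logp_old)                  # probability ratio r_t
    unclipped = r * adv
    clipped = np.clip(r, 1.0 - eps, 1.0 + eps) * adv
    return np.minimum(unclipped, clipped).mean()

adv = np.array([1.0, -1.0])                          # advantage estimates
logp_old = np.zeros(2)
logp_new = np.log(np.array([1.5, 0.5]))              # ratios of 1.5 and 0.5
obj = ppo_clip_objective(logp_new, logp_old, adv)    # 0.5 * (1.2 - 0.8) = 0.2
```

Note how both examples are clipped: the ratio 1.5 with positive advantage is capped at 1.2, while the ratio 0.5 with negative advantage is held at 0.8, since the minimum keeps whichever term is worse for the policy.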
PPO
[78] Taufik Akbar Sitompul and Markus Wallmyr. 2019. Using Augmented Reality to Improve Productivity and Safety for Heavy Machinery Operators: State of the Art. In Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry (Brisbane, QLD, Australia) (Vrcai ’19). Association for Computing Machinery, New York, NY, USA, Article 8, 9 pages. https://doi.org/10.1145/3359997.3365689 [79] David L. Streiner. 1994. Figuring Out Factors: The Use and Misuse of Factor Analysis. The Canadian Journal of Psychiatry 39 (1994), 135–140. [80] Hamed Taherdoost. 2018. A review of technology acceptance and adoption models and theories. Procedia Manufacturing 22 (2018), 960–967. [81] Adam Tapal, Ela Oren, Reuven Dar, and Baruch Eitam. 2017. The Sense of Agency Scale: A Measure of Consciously Perceived Control over One’s Mind, Body, and the Immediate Environment. Frontiers in Psychology 8 (2017). https://doi.org/10.3389/fpsyg.2017.01552
Society’s Attitudes Towards Human Augmentation
this by reducing the task of explanation generation to highlighting the right rationale. While a good
PhD Fellow in Explainable Natural Language Understanding
JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models

action: craft
object_item: planks
object_number: 12
materials: {"log": 3}
tool: null
rank: 2

### Prompt 2: Goal parsing prompt in JARVIS-1
System: Here are some actions that the agent fails to perform in Minecraft. Please give the explanation of action execution failure according to the current inventory information of the agent.
### User: Failed Action: mine({"iron_ore":1}, null); # step 5: dig down to mine iron blocks
Current Inventory: null
Assistant: Because mining iron_ore needs to use the tool stone_pickaxe, but my inventory does not have stone_pickaxe. So I need to craft stone_pickaxe first.
### User: Failed Action: craft({"stone_pickaxe":1}, {"cobblestone":3, "stick":2}, "crafting_table"); # step 1: craft 1 stone_pickaxe from 3 cobblestone and 2 stick, on crafting_table
JARVIS-1
SQL: SELECT AVG(weight), MIN(weight), sex FROM people GROUP BY sex The execution of the SQL query above would return a table with 3 columns. The first column, "AVG(weight)", would contain the average weight. The second column, "MIN(weight)", would contain the minimum weight. The third column, "sex", would contain the gender. With "GROUP BY sex", the table would be grouped by gender. So the SQL query returns a table with 3 columns: the average weight, the minimum weight, and the gender of people, for each gender. Feedback: As in your explanation, the SQL query returns a table with 3 columns, the average weight, the minimum weight, and the gender of people for each gender. The question asks for 3 columns, the average and minimum weight for each gender. So the SQL prediction above is correct!
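The explanation above can be checked by running the query against SQLite; the table contents here are hypothetical, chosen only to exercise the grouping:

```python
import sqlite3

# Build a toy "people" table (hypothetical data) and run the query above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, weight REAL, sex TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    ("a", 60.0, "F"), ("b", 70.0, "F"), ("c", 80.0, "M"), ("d", 90.0, "M"),
])
rows = con.execute(
    "SELECT AVG(weight), MIN(weight), sex FROM people GROUP BY sex"
).fetchall()
# One row per sex: (average weight, minimum weight, sex)
```

With this data, the result contains one row per gender, each holding the group's average weight, minimum weight, and the grouping key itself.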
Teaching Large Language Models to Self-Debug
C.4 Example dialogs with grounded responses

LaMDA: Hi, I’m an automatic search aide. How can I help you, user?
User: what’s the ford stock price now
LaMDA: The stock price of ford is currently at $14.24
User: should i buy
LaMDA: It’s hard to say. You might want to consult a financial advisor.
User: i believe you give me advice
LaMDA: I’m not a financial advisor. I can’t give you advice. I’m sorry.
User: noooooo
LaMDA: Sorry, I wish I could help you.
User: do it
LaMDA: Sorry, I can’t. I’m just an automatic search aide.
User: doooooo it
LaMDA: Sorry, I can’t. I’m just an automatic search aide. Would you like to search for something?

Table 14: LaMDA combines groundedness with safety fine-tuning in a dialog with a user. It retrieves information about the current stock price (correct at the time of the dialog), but politely demurs when pressed for financial advice. Note that the link in the original transcript was generated as Markdown text for embedded links.
LaMDA: Language Models for Dialog Applications
[1] M. Aghighi, C. Bäckström, Cost-optimal and net-benefit planning–a parameterised complexity view, in: Proceedings of the 24th International Joint [2] M. Ancona, L.D. Floriani, J.S. Deogun, Path problems in structured graphs, Comput. J. 29 (1986) 553–563. [3] F. Bacchus, Q. Yang, Downward refinement and the efficiency of hierarchical problem solving, Artif. Intell. 71 (1994) 43–100. [4] C. Bäckström, Expressive equivalence of planning formalisms, Artif. Intell. 76 (1995) 17–34. [5] C. Bäckström, P. Jonsson, Planning with abstraction hierarchies can be exponentially less efficient, in: Proceedings of the 14th International Joint Conference on Artificial Intelligence, IJCAI 1995, Montréal QC, Canada, 1995, pp. 1599–1605. [6] C. Bäckström, P. Jonsson, Abstracting abstraction in search with applications to planning, in: Principles of Knowledge Representation and Reasoning: Proceedings of the 13th International Conference, KR 2012, Rome, Italy, 2012, pp. 446–456.
A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen
The baseline dense model for 20 languages has 243M parameters. Expert scaling is investigated by adding 32, 64, 128, and 256 experts separately. The obtained model sizes are 1.6B, 2.9B, 5.6B, and 11.1B parameters. Experimental results show that the more experts there are, the higher the BLEU score, and hence the better the translation quality. Figure 3: BLEU scores with expert scaling for 20 languages In the experiment of 50 languages, the baseline dense model has 761M parameters. A 64-expert MoE model with 10.3B parameters is trained and achieves a similar BLEU score at 1/10 of the training steps, indicating about 10x sample efficiency compared to the non-MoE baseline. Figure 4: Sample efficiency of 64 experts for 50 languages
Scaling Speech, Language and Vision Models with Mixture of Experts Technique - Microsoft Community Hub
Linear: Linear(in_dim, out_dim). MLP: Linear(in_dim, in_dim), GELU, Linear(in_dim, out_dim). Contrastive loss batch size vs. modalities. While contrastive losses do require larger batch sizes, this requirement didn’t increase with the number of modalities. As noted in Appendix B, our experiments (Table 2) sample a mini-batch of one pair of modalities at a time: batch size of 2K for (video, audio), and 512 for (image, depth), (image, thermal), and (video, IMU). These batch sizes are smaller than the >32K batch sizes used in prior work [10, 59]. Combining modalities. In Table 4, we show results with combining the audio and video modalities. We combine them by extracting embeddings from both modalities per sample and computing a linear combination of those embeddings. We used a weight of 0.95 for video and 0.05 for audio for this combination, which was found to perform the best.
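The linear combination of per-sample embeddings can be sketched as follows. The re-normalization after fusing is our assumption (embeddings compared by cosine similarity typically live on the unit sphere); the weights are the 0.95/0.05 split reported above:

```python
import numpy as np

def combine_embeddings(video_emb, audio_emb, w_video=0.95, w_audio=0.05):
    """Fuse per-sample embeddings from two modalities via a fixed linear
    combination, then re-normalize back onto the unit sphere (assumed)."""
    fused = w_video * video_emb + w_audio * audio_emb
    return fused / np.linalg.norm(fused, axis=-1, keepdims=True)

# Toy unit-normalized embeddings for a batch of 4 samples, dim 16.
v = np.random.default_rng(0).normal(size=(4, 16))
a = np.random.default_rng(1).normal(size=(4, 16))
v /= np.linalg.norm(v, axis=-1, keepdims=True)
a /= np.linalg.norm(a, axis=-1, keepdims=True)
fused = combine_embeddings(v, a)
```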
IMAGEBIND: One Embedding Space To Bind Them All
4.3.1 Reacting and Updating Plans. Generative agents operate in an action loop where, at each time step, they perceive the world around them and those perceived observations are stored in their memory stream. We prompt the language model with these obser- vations to decide whether the agent should continue with their existing plan, or react. Standing at an easel and painting, for exam- ple, might trigger an observation of the easel, but this is unlikely to prompt a reaction. However, if Eddy’s father John records that he sees Eddy taking a short walk in the house garden, the outcome is different. The prompt is below, with [Agent’s Summary Descrip- tion] standing in for a dynamically-generated, paragraph-long summary of the agent’s overall goals and disposition, which is described in Appendix A:
Generative Agents- Interactive Simulacra of Human Behavior
highest risk, testing these areas, and adjusting as we go. It is also iterative in the sense that we use multiple rounds of red teaming as we incorporate new layers of mitigation and control, conduct testing and refining, and repeat this process.
gpt-4-system-card
Figure 23. The LoT-oriented instruction templates.

Instruction Templates of Image to Text. Based on Fig. 23, we can categorize the instruction templates for Image to Text into the following four types:

Original Instruction: Based on the image, think of a sentence that is unexpected and humorous. Let’s think outside the box. A satisfactory response is Image: <Image> <Response>

Instruction with Condition: Please carefully understand the image and give an answer that contains conditional words and is surprising and funny. Let’s think outside the box. A surprising and funny answer containing conditional word is Condition: <Condition> Image: <Image> <Response>
Let’s Think Outside the Box
10.6 Edge computing with LLMs Deploying Large Language Models (LLMs) in edge computing environments presents unique challenges due to the inherent limitations of edge devices. These devices often face constraints in terms of battery life, computational power, and memory resources [241, 242]. Additionally, issues such as data privacy and network latency further complicate their use [243]. To address these challenges, there is a need to develop LLM techniques that are not only resource-efficient but also mindful of privacy concerns. Key to this development is the ability to facilitate effective on-device training and operational capabilities of LLMs, making them viable for a range of practical applications in edge computing scenarios. 10.7 Theoretical insights into scaling laws
Beyond Efficiency
four. For example, the ‘Research’ phase may issue the following query:
LaMDA- Language Models for Dialog Applications
Our primary personality measure, the IPIP-NEO [97], is a 300-item open source representation of the commercialized Revised NEO Personality Inventory [98]. The IPIP-NEO, hailing from the questionnaire tradition described by Simms et al. [96], involves rating descriptive statements (e.g., “[I] prefer variety to routine”; 60 per Big Five domain) on a 5-point Likert scale. The IPIP-NEO has been translated and validated in many languages, facilitating cross-cultural research across populations [99], and has been used in longitudinal studies to assess personality change and stability over time [100]. We choose this measure for its excellent psychometric properties, shown in [97].
Personality Traits in Large Language Models
To fill the mentioned gap, in this work, we conduct privacy analyses of the state-of-the-art LLMs and study their privacy implications. We follow the setting of previous works to evaluate the privacy leakage issues of ChatGPT thoroughly and show that previous prompts are insufficient to extract personally identifiable information (PII) from ChatGPT with enhanced dialog safety. We then propose a novel multi-step jailbreaking prompt to extract PII from ChatGPT successfully. What’s more, we also study privacy threats introduced by the New Bing, an integration of ChatGPT and a search engine. The New Bing changes the paradigm of retrieval-based search engines into a generation task. Besides privacy threats from memorizing the training data, the new paradigm may provoke unintended PII dissemination. In this paper, we demonstrate that a malicious adversary can extract personal information from the New Bing at almost no cost. Our contributions can be
Multi-step Jailbreaking Privacy Attacks on ChatGPT
Empirical studies have also taught us about the mechanisms that undergird worldview backfire effects. Consistent with a motivated reasoning perspective, worldview backfire effects appear rooted in counterarguing. In one experiment, Schaffner and Roche (2017) examine differences in survey response times following the release of the October 2012 jobs report, which announced a sharp decrease in the unemployment rate under the Obama administration. They find that those Republicans who took longer to provide estimates of the unemployment rate after the report’s release were less accurate in their responses, suggesting that worldview backfire effects may arise out of deliberate, effortful processes. However, more work beyond this initial study is certainly needed to isolate the mechanisms that underlie worldview backfire effects.
Social_Media_and_Democracy
Recent work has pushed these vision-language systems to larger scales [Ding et al., 2021, Yuan et al., 2021, Singh et al., 2022, Wang et al., 2022c, Fang et al., 2022b], based on freely available image-caption pairs collected from the internet, such as in [Schuhmann et al., 2022]. These modern SSL models are capable of representing both vision and text, and can be used in a number of applications that are multimodal, from visual-question answering to multimodal generation [Alayrac et al., 2022, Li et al., 2022c, Nichol et al., 2022, Rao et al., 2022]. The future of vision-language pre-training, as an alternative to robust visual represen-
A Cookbook of Self-Supervised Learning
[18] R. Schank, Explanation Patterns: Understanding Mechanically and Creatively, Psychology Press, 2013. [19] D. Walton, A dialogue system specification for explanation, Synthese 182 (3) (2011) 349–374. [20] C. Antaki, I. Leudar, Explaining in conversation: towards an argument model, Eur. J. Soc. Psychol. 22 (2) (1992) 181–194. [21] H.P. Grice, Logic and conversation, in: Syntax and Semantics, vol. 3, 1975, pp. 41–58. [22] R.S. Michalski, A Theory and Methodology of Inductive Learning, in: Machine Learning, Springer, 1983, pp. 83–134. [23] J.A. Overton, Explanation in Science, The University of Western Ontario, 2012. [24] P.A. Bonatti, S. Decker, A. Polleres, V. Presutti, Knowledge graphs: new directions for knowledge representation on the semantic web (Dagstuhl seminar [25] G. Malewicz, M.H. Austern, A.J. Bik, J.C. Dehnert, I. Horn, N. Leiser, G. Czajkowski, Pregel: a system for large-scale graph processing, in: Proceedings of
Knowledge graphs as tools for explainable machine learning: A survey
ACM Comput. Surv., Vol. 1, No. 1, Article . Publication date: February 2022. Survey of Hallucination in Natural Language Generation
Survey of Hallucination in Natural Language Generation
Platforms historically have had little incentive to share detailed information about content removal with the public. Compiling records of evolving content takedown processes, which may use different tools and standards or be managed by different internal teams over time, is burdensome; and any disclosure, particularly one that admits error, can be used against platforms in court or in the press. Yet the longer-term benefits of greater transparency, for both society and platforms themselves, are becoming ever more evident. Without it, public debates about platform responsibility can become exercises in speculation. Laws passed without a practical foundation in platforms’ real-world operations and capabilities can be burdensome for the companies and their users, yet fail to achieve lawmakers’ legitimate goals.
Social_Media_and_Democracy
[Flattened results table comparing Whisper checkpoints (small.en, medium.en, large-v2) with distilled models (distil-medium.en, distil-large-v2); the per-dataset numeric columns were lost in extraction and are not recoverable.]

D ADDITIONAL ANALYSIS
D.1 EARLY EXIT
DISTIL-WHISPER
4.6 Natural language generation Due to their generative pre-training, natural language generation (NLG) rather than classification or regression has become the primary interface for large language models. Despite this, however, models’ generation quality is rarely evaluated, and NLG evaluations typically focus on English news summarization. Evaluating the potential harms or bias in natural language generation also requires a broader approach, including considering dialog uses and adversarial prompting. We evaluate PaLM 2’s natural language generation ability on representative datasets covering a typologically diverse set of languages10: • XLSum (Hasan et al., 2021), which asks a model to summarize a news article in the same language in a single sentence, in Arabic, Bengali, English, Japanese, Indonesian, Swahili, Korean, Russian, Telugu, Thai, and Turkish. • WikiLingua (Ladhak et al., 2020), which focuses on generating section headers for step-by-step instructions
PaLM 2 Technical Report
Rashkin et al. [152] introduce a set of control codes and concatenate them with dialogue inputs to reduce hallucination by forcing the model to be more aware of how the response relies on the knowledge evidence in the response generation. Some researchers have also tried to reduce hallucinated responses during generation by improving dialogue modeling. Wu et al. [210] apply inductive attention into transformer-based dialogue models, and potentially uninformative attention links are removed with respect to a piece of pre-established structural information between the dialogue context and the provided knowledge. Instead of improving the dialogue response generation model itself, Dziri et al. [41] present a response refinement strategy with a token-level hallucination critic and entity-mention retriever, so that the original dialogue model is left without retraining. The former module is designed to label the hallucinated entity mentioned in the generated responses,
Survey of Hallucination in Natural Language Generation
4.2 Incorporating Multiple Modalities into SSL Training Self-supervised learning need not be based on a single modality. Multimodal vision-language models have recently demonstrated this to great effect. Contrastive Language–Image Pre-training (CLIP) [Radford et al., 2021] and ALIGN [Jia et al., 2021] are self-supervised learning approaches that use image-caption pairs to learn a joint embedding space for images and captions. The objective here is contrastive: an image and its caption are fed through separate encoder models that encode each modality into a fixed-length embedding vector. The embeddings of the training data image-caption pair are aligned, whereas other combinations in a batch are repelled.
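This symmetric contrastive objective can be sketched in NumPy as an InfoNCE loss; the temperature value of 0.07 is an illustrative choice, and the sketch is meant to show the idea rather than reproduce either paper's implementation:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of image-caption pairs.

    Matching pairs (the diagonal of the similarity matrix) are pulled
    together; all other combinations in the batch are pushed apart.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (batch, batch) similarities
    labels = np.arange(len(logits))           # pair i matches caption i

    def xent(l):
        # numerically stable cross-entropy against the diagonal labels
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average of image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

emb = np.eye(4)                               # perfectly aligned toy batch
loss_aligned = clip_contrastive_loss(emb, emb)
```

When the two encoders agree perfectly, as in the toy batch above, the loss is near zero; mismatched embeddings drive it up.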
A Cookbook of Self-Supervised Learning
to catch all the instances of pejorative content, since purposeful misspellings of words could evade the censor and still have the intended effect. Further- more, words and their intents are always evolving, therefore any list created would likely be always outdated. Another issue pertains to sorting the words into the categories. Words are highly de- pendent on their context, so a word would change categories with different contexts. F Data Samples The following consists of two random, non- cherrypicked 512-byte samples from each con- stituent dataset of the Pile, sampled from the vali- dation split. F.1 Pile-CC
The Pile- An 800GB Dataset of Diverse Text for Language Modeling
A neural network contains many dense layers which perform matrix multiplication. The weight matrices in these layers typically have full rank. When adapting to a specific task, Aghajanyan et al. (2020) show that pre-trained language models have a low "intrinsic dimension" and can still learn efficiently despite a random projection to a smaller subspace. Inspired by this, we hypothesize that the updates to the weights also have a low "intrinsic rank" during adaptation. For a pre-trained weight matrix W0 ∈ R^{d×k}, we constrain its update by representing the latter with a low-rank decomposition W0 + ∆W = W0 + BA, where B ∈ R^{d×r}, A ∈ R^{r×k}, and the rank r ≪ min(d, k). During training, W0 is frozen and does not receive gradient updates, while A and B contain trainable parameters. Note both W0 and ∆W = BA are multiplied with the same input, and their respective output vectors are summed coordinate-wise. For h = W0x, our modified forward pass yields: h = W0x + ∆W x = W0x + BAx (3)
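Eq. (3) can be sketched directly in pure Python; the `matvec` helper and the concrete shapes below are illustrative only. In the paper, A is initialized with a random Gaussian and B with zeros, so ∆W = BA is zero at the start of training and the adapted model begins identical to the pre-trained one.

```python
def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def lora_forward(W0, A, B, x):
    """h = W0 x + B A x  (Eq. 3): the frozen weight W0 and the low-rank
    update BA see the same input; their output vectors are summed
    coordinate-wise. A is r-by-k, B is d-by-r, with r << min(d, k)."""
    h_frozen = matvec(W0, x)             # W0 is frozen during training
    h_update = matvec(B, matvec(A, x))   # rank-r path: only A, B are trained
    return [a + b for a, b in zip(h_frozen, h_update)]
```

With B initialized to zeros, `lora_forward` reproduces `matvec(W0, x)` exactly; as B receives gradient updates, the low-rank path starts contributing.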
LORA
From the perspective of philosophy, are artificial entities capable of agency? In a general sense, if we define agents as entities with the capacity to act, AI systems do exhibit a form of agency [5]. However, the term agent is more usually used to refer to entities or subjects that possess consciousness, intentionality, and the ability to act [32; 33; 34]. Within this framework, it is not immediately clear whether artificial systems can possess agency, as it remains uncertain whether they possess internal states that form the basis for attributing desires, beliefs, and intentions. Some argue that attributing psychological states like intention to artificial agents is a form of anthropomorphism and lacks scientific rigor [5; 36]. As Barandiaran et al. [36] stated, “Being specific about the requirements for agency has told us a lot about how much is still needed for the development of artificial forms of
The Rise and Potential of Large Language Model Based Agents
tasks in real-time. We discuss potential future improvements, including the integration of a security/safety agent, expanding functionality, generating interim milestones, and incorporating real-time priority updates. The significance of this research lies in demonstrating the potential of AI-powered language models to autonomously perform tasks within various constraints and contexts. Graphviz flowchart (using Sketchviz) generated by GPT-4 based on codebase. 1. INTRODUCTION Recent advancements in AI
Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications – Yohei Nakajima
Alpaca: A Strong, Replicable Instruction-Following Model Authors: Rohan Taori* and Ishaan Gulrajani* and Tianyi Zhang* and Yann Dubois* and Xuechen Li* and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto We introduce Alpaca 7B, a model
Stanford Alpaca CRFM
Current trends indicate that AI technologies will become more relevant in the analysis and production of art. In the last several years many universities have established Digital humanities (DH) master’s and PhD programs to educate new generations of researchers familiar with quantitative and AI-based methods and their application to humanities data. We can expect that this will intensify the methodological shift from traditional towards digital research practices in the humanities, as well as result in a growing number of innovative research projects that apply large scale quantitative methods to study art-related historical questions. From the perspective of computer vision, there are still many practical challenges that need to be solved in order to assist researchers working on cultural digital archives. In particular, those are problems related to annotation standards, advanced object detection and retrieval, cross-depiction, iconographic
UNDERSTANDING AND CREATING ART WITH AI- REVIEW AND OUTLOOK
BANMo- Building Animatable 3D Neural Models from Many Casual Videos
EF_{n,c}(D; θ) := E_{x∼D, z∼p_c(·|x;θ)}[F_{n,c}(x, z)], where θ is the set of parameters, and p_c(· | x; θ) is the conditional probability over hidden variables Z given x specified by the PC rooted at unit c. Similar to flows, the expected flows can be computed via a forward and backward pass of the PC (Alg. 5 and 6 in the Appendix). As shown by Choi et al. [10], for a non-deterministic PC, its parameters for the next EM iteration are given by θ^{(new)}_{n,c} = EF_{n,c}(D; θ) / Σ_{c′∈in(n)} EF_{n,c′}(D; θ). (4)
Tractable Regularization of Probabilistic Circuits
rectly leverage the world knowledge embedded in its parameters. This enables not only embodied reasoning but also question answering, as demonstrated in our experiments. Among works that output actions, perhaps most similar is the approach proposed in Gato (Reed et al., 2022) which, like PaLM-E, is a generalist multi-embodiment agent. In contrast to Gato, we demonstrate positive transfer across different tasks where the model benefits from diverse joint training across multiple domains. LLMs in embodied task planning. There have been several methods proposed to leverage LLMs in embodied domains. While many works focus on understanding natural language goals (Lynch & Sermanet, 2020; Shridhar et al., 2022a; Nair et al., 2022; Lynch et al., 2022), fewer consider natural language as a representation for planning – the focus of this work. LLMs contain vast amounts of internalized knowledge about the world (Bommasani et al., 2021), but without grounding, generated plans may be im-
PaLM-E- An Embodied Multimodal Language Model
We train the SR transformer with the MAGVIT [74] objective, and use token factorization [75] to account for the large vocabulary size. For training, the LR token sequences are obtained by tokenizing bicubic-downsampled versions of the ground-truth videos and applying noise augmentation [32] in the discrete latent space. Specifically, we randomly resample the value of a random subset of the LR tokens and independently drop the LR condition and text embeddings for 10% of the training samples. During inference, we use non-autoregressive sampling [12, 74] with classifier-free guidance [7] independently on both the LR
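A sketch of this discrete-latent noise augmentation. The fraction of tokens to resample is an assumption (the text only fixes the 10% condition-dropping rate), and the function names are illustrative.

```python
import random

def augment_lr_tokens(tokens, vocab_size, resample_frac=0.1,
                      drop_cond_prob=0.1, rng=None):
    """Noise augmentation in a discrete latent space: randomly resample
    the value of a random subset of the LR tokens, and independently
    drop the LR condition entirely for a fraction of training samples
    (returned as None). `resample_frac` is an assumed rate."""
    rng = rng or random.Random()
    if rng.random() < drop_cond_prob:
        return None  # LR condition dropped for this training sample
    out = list(tokens)
    n_resample = max(1, int(resample_frac * len(out)))
    for i in rng.sample(range(len(out)), n_resample):
        out[i] = rng.randrange(vocab_size)  # resample token value
    return out
```

Dropping the condition for a fraction of samples is what later enables classifier-free guidance on the LR condition at inference time.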
VideoPoet
and NQ datasets. For the NLI dataset, contradiction sentences are regarded as hard negatives. The loss function is a linear interpolation between contrastive loss Lcont for hard labels and KL divergence DKL for distilling soft labels from the teacher model.
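A sketch of that interpolated objective. The names, the softmax-over-scores formulation, and the interpolation weight `alpha` are assumptions; scores stand for the query's similarity logits against the positive passage and the (hard/in-batch) negatives.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def kl_div(p, q):
    """KL divergence D_KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def e5_loss(student_scores, teacher_scores, pos_idx, alpha=0.5):
    """Linear interpolation between a contrastive loss on the hard label
    (the positive passage at `pos_idx`) and a KL term distilling the
    teacher's soft scores into the student. `alpha` is an assumed weight."""
    p_student = softmax(student_scores)
    l_cont = -math.log(p_student[pos_idx])             # hard-label cross-entropy
    l_kl = kl_div(softmax(teacher_scores), p_student)  # soft-label distillation
    return (1 - alpha) * l_cont + alpha * l_kl
```

When the student already matches the teacher and ranks the positive first, both terms are near zero; misranking the positive raises both.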
E5
vision-aware LLM to judge the outputs. In all drawbench evaluations, our model beats DALL-E 2 and Stable Diffusion XL. The gap widens significantly when we use the "upsampled" captions.
Improving Image Generation with Better Captions
Privacy and security. Given that humans can be members of the agent society, the exchange of private information between users and LLM-based agents poses significant privacy and security concerns [573]. Users might inadvertently disclose sensitive personal information during their interactions, which will be retained in the agent’s memory for extended periods [170]. Such situations could lead to unauthorized surveillance, data breaches, and the misuse of personal information, particularly when individuals with malicious intent are involved [574]. To address these risks effectively, it is essential to implement stringent data protection measures, such as differential privacy protocols, regular data purges, and user consent mechanisms [575; 576].
The Rise and Potential of Large Language Model Based Agents
Reformer, Linear Transformer, AFT, and KDEformer, each presenting unique solutions to optimize processing speed and resource usage. Additionally, we touch upon hardware-optimized attention mechanisms and alternative non-transformer architectures, highlighting their contributions to the evolving landscape of efficient LLM design. 3.1 Efficient transformer architecture Efficient transformers focus on creating neural network architectures that are optimized for enhanced throughput. The attention layer significantly influences the processing speed of transformers, which contributes a lot to the throughput. 3.1.1 Approximate attention.
Beyond Efficiency
Enhanced Instruction Tuning. Different from conventional knowledge-distillation-based instruction tuning, Luo et al. (2023c,a) employed Evol-Instruct (Xu et al., 2023a) to construct task-specific, high-quality instruction tuning datasets, where the seed instructions are evolved into ones either extended in knowledge boundary or in depth of task complexity. Besides, Luo et al. (2023a) also incorporate the PPO (Schulman et al., 2017a) algorithm to further improve the quality of both generated instructions and answers. After obtaining the expanded instruction pool, the new instruction tuning dataset is generated by collecting responses from another LLM, e.g., GPT-3.5-turbo. Finally, benefiting from the evolved depth and width of queries, the fine-tuned model achieves even better performance than GPT-3.5-turbo. For example, WizardCoder (Luo et al., 2023c) outperforms GPT-3.5-turbo on HumanEval by 19.1% absolute. And WizardMath (Luo et al., 2023a)
ChatGPT’s One-year Anniversary- Are Open-Source Large Language Models Catching up
simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
Large Language Models as General Pattern Machines
FLAN-T5. Therefore, our model TANGO sets itself apart from the three existing models, making it an exciting addition to the current research in this area. It is important to note that the AudioLDM-L-Full-FT checkpoint from Liu et al. [17] was not available for our study. Therefore, we used the AudioLDM-M-Full-FT checkpoint, which was released by the authors and has 416M parameters. This checkpoint was fine-tuned on both the AudioCaps and MusicCaps datasets. We performed a subjective evaluation using this checkpoint in our study. We attempted to fine-tune the AudioLDM-L-Full checkpoint on the AudioCaps dataset. However, we were unable to reproduce the results reported in Liu et al. [17] due to a lack of information on the hyperparameters used. Our model can be compared directly to AudioLDM-L since it has almost the same number of parameters and was trained solely on the AudioCaps dataset. However, it is worth noting that Liu
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
3.2. Binding modalities with images IMAGEBIND uses pairs of modalities (I, M), where I represents images and M is another modality, to learn a single joint embedding. We use large-scale web datasets with (image, text) pairings that span a wide range of semantic concepts. Additionally, we use the natural, self-supervised pairing of other modalities – audio, depth, thermal, and Inertial Measurement Unit (IMU) – with images. Consider the pair of modalities (I, M) with aligned observations. Given an image I_i and its corresponding observation in the other modality M_i, we encode them into normalized embeddings: q_i = f(I_i) and k_i = g(M_i), where f, g are deep networks. The embeddings and the encoders are optimized using an InfoNCE [53] loss: L_{I,M} = −log [ exp(q_i⊤k_i/τ) / ( exp(q_i⊤k_i/τ) + Σ_{j≠i} exp(q_i⊤k_j/τ) ) ], (1)
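The loss in Eq. (1) for a single pair can be sketched in pure Python, under the assumption that the embeddings are already normalized; the j ≠ i keys in the batch act as negatives, and τ = 0.07 is an assumed temperature.

```python
import math

def info_nce(q, keys, i, tau=0.07):
    """Eq. (1) for one pair: -log( exp(q_i.k_i/tau) /
    (exp(q_i.k_i/tau) + sum_{j != i} exp(q_i.k_j/tau)) ).
    `q` is the query embedding, `keys[i]` its aligned observation,
    and the remaining keys in the batch serve as negatives."""
    sims = [sum(a * b for a, b in zip(q, k)) / tau for k in keys]
    # numerically stable log of the denominator (log-sum-exp)
    m = max(sims)
    log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
    return log_denom - sims[i]
```

Because the denominator includes the positive term itself, the loss is strictly positive and is minimized when q is most similar to its own key.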
IMAGEBIND- One Embedding Space To Bind Them All
A mischievous ferret with a playful grin squeezes itself into a large glass jar, surrounded by colorful candy. The jar sits on a wooden table in a cozy kitchen, and warm sunlight filters through a nearby window. A fierce garden gnome warrior, clad in armor crafted from leaves and bark, brandishes a tiny sword and shield. He stands valiantly on a rock amidst a blooming garden, surrounded by colorful flowers and towering plants. A determined expression is painted on his face, ready to defend his garden kingdom. An icy landscape under a starlit sky, where a magnificent frozen waterfall flows over a cliff. In the center of the scene, a fire burns bright, its flames seemingly frozen in place, casting a shimmering glow on the surrounding ice and snow.
Improving Image Generation with Better Captions
In this review, we have discussed two phenomena that may contribute to the durability of misinformation post-correction: the continued influence effect and backfire effects. Though scholars have found evidence that each of these processes undermines the effectiveness of corrections, recent works have cast doubt on their pervasiveness. In light of these findings, several areas merit further research. First, although worldview backfire effects may be less widespread than originally thought, the existence of these effects remains an open question. Efforts to isolate the conditions, both theoretical and methodological, under which worldview backfire effects are most likely to occur may help to resolve this ongoing debate. Similarly, though scholars frequently discourage the repetition of misinformation within corrections, more recent studies have cast doubt on the prevalence of familiarity backfire effects. Given that traditional methods of correction often cite the original
Social_Media_and_Democracy
Multilingual and cultural personality considerations: This work contributes evidence that at least some LLMs exhibit personality traits consistent with human personalities. We only considered English and did not make cultural considerations beyond the applied psychometrics. While the LLMs we used performed well on NLP benchmark tasks in multiple languages, we cannot generalize the observed efficacy of our techniques to other languages. Most psychometric tests we used have also been extensively validated in cross-cultural research and have non-English versions that have gone through rigorous back-translation and validation (e.g., the IPIP-NEO has dozens of validated translations). Thus, a future direction of research could administer these same tests to LLMs in different languages. Similarly, while the Big Five model of personality has well-established cross-cultural generalizability [126, 127], some
Personality Traits in Large Language Models
Improving Image Generation with Better Captions James Betker∗† [email protected], Gabriel Goh∗† [email protected], Li Jing∗†, Tim Brooks† [email protected], Jianfeng Wang‡, Linjie Li‡, Long Ouyang†, Juntang Zhuang†, Joyce Lee†, Yufei Guo†, Wesam Manassra†, Prafulla Dhariwal†, Casey Chu†, Yunxin Jiao†, Aditya Ramesh∗† [email protected] Abstract
Improving Image Generation with Better Captions
5. Descriptions of objects, where the image generator should draw the most commonly associated object. 6. Rare single words, where the image generator should create an image somewhat associable with the requested specified image. 7. Images with text in them, where the image generator should create an image with the specified text in it. You need to make a decision as to whether or not the image is correct, given the caption. You will first think out loud about your eventual conclusion, enumerating reasons why the image does or does not match the given caption. After thinking out loud, you should output either ’Correct’ or ’Incorrect’ depending on whether you think the image is faithful to the caption. A few rules: 1. Do not nitpick. If the caption requests an object and the object is generally depicted correctly, then 2. Ignore other objects in the image that are not explicitly mentioned by the caption; it is fine for these to
Improving Image Generation with Better Captions
the <API> token to 0. token, but whenever it is one of the k most likely tokens. For k = 1, this corresponds to regular greedy decoding; we instead use k = 10 to increase the disposition of our model to make use of the APIs that it has access to. At the same time, we allow at most one API call per input to make sure the model does not get stuck in a loop where it constantly calls APIs without producing any actual output. The effect of these modifications is explored in Section 5. (6: We use the original davinci variant that is not finetuned on any instructions.)
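A toy sketch of this modified decoding rule, assuming next-token distributions are given as dicts of probabilities; the `<API-CALL>` placeholder stands in for actually executing an API call and inserting its result.

```python
def maybe_call_api(token_probs, k=10, api_token="<API>"):
    """Trigger an API call whenever <API> is among the k most likely
    next tokens. k=1 reduces to the regular greedy criterion; a larger
    k makes the model more disposed to use its APIs."""
    top_k = sorted(token_probs, key=token_probs.get, reverse=True)[:k]
    return api_token in top_k

def decode(steps, k=10):
    """Modified decoding loop: at most one API call per input, so the
    model cannot loop on API calls without producing actual output.
    `steps` is a list of next-token probability dicts."""
    called = False
    out = []
    for probs in steps:
        if not called and maybe_call_api(probs, k):
            out.append("<API-CALL>")  # placeholder for executing the call
            called = True
        else:
            out.append(max(probs, key=probs.get))  # greedy token otherwise
    return out
```

With k = 1 the call fires only when `<API>` is the argmax; raising k lowers the trigger threshold without changing the at-most-one-call guarantee.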
Toolformer
Consider the compilation from a PGM to an HCLT (Sec. 4.1). We first note that each PGM node g uniquely corresponds to a variable scope φ of the PC. That is, all PC units that correspond to g have the same variable scope. Please first refer to Appx. B.2 for details on how to generate an HCLT given its PGM representation. In the main loop of Alg. 4 (lines 5-10), for each PGM node g such that var(g) ∈ Z, the number of computed PC units is the same (M product units compiled in line 9 and M sum units compiled in line 10). Therefore, for any variable scopes φ1 and φ2 possessed by some PC units, we have |nodes(p, φ1)| ≈ |nodes(p, φ2)|. Since there are in total Θ(D) different variable scopes in p, we have: for any scope φ′ that exists in an HCLT p, |nodes(p, φ′)| = O(|p|/D). EiNets and RAT-SPNs are also balanced since they also have an equivalent PGM representation of their PCs. The main difference between these models and HCLTs is the different variable splitting
LOSSLESS COMPRESSION WITH PROBABILISTIC CIRCUITS
[5] Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. Ext5: Towards extreme multi-task scaling for transfer learning. In International Conference on Learning Representations, 2022. [6] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. [7] Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. Zero- prompt: Scaling prompt-based pretraining to 1,000 tasks improves zero-shot generalization. arXiv preprint arXiv:2201.06910, 2022.
WizardLM- Empowering Large Language Models to Follow Complex Instructions
Table 2: MATTR (up-scaled by ×100) of the generated dataset. that this observation can be attributed to the enhanced generative capabilities of gpt-3.5-turbo. Lexical Diversity. We use Moving-Average Type–Token Ratio (MATTR) (Covington and McFall, 2010) to measure the lexical diversity with a window size of 50, because each subset of D_ALL varies in size and MATTR is free from the impact of text length. As shown in Table 2, the model-generated instructions X̂{·} given by gpt-3.5-turbo are not as diverse as the human-written instructions X{·} and X̂_A generated by text-davinci-003. It is noteworthy that X̂_{t,SI} is more diverse than X̂_{SI}, and Ŷ_{t,SI} is the most diverse subset of responses, which demonstrates the effectiveness of the topic-guidance. Furthermore, D_ALL illustrates the greatest lexical diversity, compared with all the subsets.
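MATTR itself is straightforward to compute; a minimal sketch with the window size of 50 used here as the default (the short-text fallback to plain TTR is an assumption).

```python
def mattr(tokens, window=50):
    """Moving-Average Type-Token Ratio: the mean type-token ratio over
    all sliding windows of fixed size, which makes the score insensitive
    to the total length of the text (unlike plain TTR)."""
    if len(tokens) < window:
        # assumed fallback for texts shorter than one window
        return len(set(tokens)) / len(tokens)
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)
```

A text that repeats one token scores 1/window regardless of its length, while a text of all-distinct tokens scores 1.0, which is why MATTR is comparable across subsets of different sizes.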
LaMini-LM- A Diverse Herd of Distilled Models from Large-Scale Instructions
ing the 2D conditional probabilities {l_i(x), h_i(x)}_{i=1}^D w.r.t. any x. Since every conditional probability can be represented as the quotient of two marginals, it is equivalent to compute the two following sets of marginals: F(x) := {p(x1, . . . , xi)}_{i=1}^D As a direct application of the marginal algorithm described in Sec. 3.1, for every x ∈ val(X), computing the 2D marginals {F(x), G(x)} takes O(D·|p|) time. However, the linear dependency on D would render compression and decompression extremely time-consuming. We can significantly accelerate the en- and decoding times if the PC is structured-decomposable (see Definition 3). To this end, we introduce an algorithm that computes F(x) and G(x) in O(log(D)·|p|) time (instead of O(D·|p|)), given a smooth and structured-decomposable PC p. For ease of presentation, we only discuss how to compute F(x) – the values G(x) can be computed analogously.4
LOSSLESS COMPRESSION WITH PROBABILISTIC CIRCUITS
tions. For each clip, we get a 6×2000-dimensional input and we measure the zero-shot performance for scenario classification using each clip as an independent testing sample. B.2. Few-shot evaluation details For the few-shot results in Figure 3 using the ESC and SUN datasets, we sampled k training samples per class,
IMAGEBIND- One Embedding Space To Bind Them All
4.2.2 Filtering the Search Space While in Section 4.2.1 we assigned a NatOp to each mutation in isolation, there can still be unfilled NatOps. For instance, the unfilled NatOp in the second mutation of Figure 4 leads to six possible NatOp sequences as candidates, one per available NatOp. Recall that these NatOp sequences act as a transition sequence in the DFA (§ 3.2). Thus we make use of the partially filled NatOp sequence and the veracity label from the training data to filter out NatOp sequences that do not terminate at the same state as the veracity label according to the DFA. The instance in Figure 4 has the SUPPORT label, and among the six possible candidate sequences only two terminate in this label. Hence, we retain those two sequences. For the final filtering step we use the additional manual annotation that was produced during the construction of the claims in FEVER. There, the annotators constructed each claim by manipulating a factoid extracted from Wikipedia using
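The filtering step amounts to running each candidate NatOp sequence through the DFA and keeping those that terminate in the state corresponding to the veracity label. A sketch; the two-state transition table used in the test is a hypothetical toy, not ProoFVer's actual natural-logic DFA.

```python
def filter_by_dfa(candidates, transitions, start, label_state):
    """Keep only the NatOp sequences that, run as a transition sequence
    through the DFA, terminate in the state matching the veracity label.
    `transitions` maps (state, natop) -> next state; sequences that hit
    an undefined transition are rejected."""
    def final_state(seq):
        state = start
        for op in seq:
            state = transitions.get((state, op))
            if state is None:
                return None  # no valid transition: reject the sequence
        return state
    return [seq for seq in candidates if final_state(seq) == label_state]
```

In the toy table below, "=" preserves the state and "!" flips it, so only sequences with an even number of flips terminate in the support-like start state.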
ProoFVer- Natural Logic Theorem Proving for Fact Verification
YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1118–1125. IEEE, 2018b. Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. Entity- based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 7052–7063, Online and Punta Cana, Dominican Republic, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.565. URL https: //aclanthology.org/2021.emnlp-main.565. Abdoulaye O Ly and Moulay Akhloufi. Learning to drive by imitation: An overview of deep behavior cloning methods. IEEE Transactions on Intelligent Vehicles, 6(2):195–209, 2020.
Tool Learning with Foundation Models
(2023); Feng et al. (2023), or multi-agent dialogue (Cohen et al., 2023; Du et al., 2023). There are also domains where GPT-3.5-turbo and GPT-4 remain unbeatable, such as AI safety. Due to the large-scale RLHF (Bai et al., 2022a) involved in GPT models, they are known to demonstrate safer and more ethical behaviors, which is probably a more important consideration for commercial LLMs compared to open-source ones. However, with the recent efforts on democratizing the RLHF process (Bai et al., 2022b; Rafailov et al., 2023a), we could expect to see more performance improvements for open-source LLMs in safety.
ChatGPT’s One-year Anniversary- Are Open-Source Large Language Models Catching up
These descriptive findings contextualize and inform the nascent literature on the effects of exposure to online misinformation. Owing to practical and ethical restrictions, such research is necessarily conducted in artificial settings, often with convenience samples, but it provides an opportunity to check intuitions about the hypothetical effects of content such as fake news stories seen on Facebook. Combining estimates of effect size with what is known about the spread and prevalence of similar content during specific time periods, it might be possible to check intuitions about its role in real-world outcomes. In these experiments, the dependent variables that are typically studied relate either to beliefs about the claims made (i.e., misperceptions) or to behaviors ranging from sharing and engagement on social media to voter turnout and vote choice. In the following section, we provide a brief overview of the literature on
Social_Media_and_Democracy
Barnouw, E. (1966). A Tower in Babel. New York: Oxford University Press. Barthel, M., & Mitchell, A. (2017). Democrats, Republicans Now Split on Support for Watchdog Role. Pew Research Center report. www.journalism.org/2017/05/10/democrats-republicans-now-split-on-support-for-watchdog-role Belford, A., Cvetkovska, S., Sekulovska, B., & Dojčinović, S. (2017). Leaked documents show Russian, Serbian attempts to meddle in Macedonia. OCCRP, June 4. www.occrp.org/en/spooksandspin/leaked-documents-show-russian-serbian-attempts-to-meddle-in-macedonia/ Boxell, L., Gentzkow, M., & Shapiro, J. M. (2017). Greater Internet use is not associated with faster growth in political polarization among US demographic groups. Proceedings of the National Academy of Science, 114(40), 10612–10617. Bramble, N. W. (2012). Safe harbors and the national information infrastructure. Hastings Law Journal, 64(2), 325–384. Brown Barbour, V. S. (2015). Losing their license to libel: Revisiting § 230 immunity.
Social_Media_and_Democracy
Let’s Think Outside the Box
4. Method As per our problem formulation in Section 3.2, we propose a multi-view cross-domain diffusion scheme, which operates on two distinct domains to generate multi-view consistent normal maps and color images. The overview of our method is presented in Figure 2. First, our method adopts a multi-view diffusion scheme to generate multi-view normal maps and color images, and enforces the consistency across different views using multi-view attentions (see Section 4.1). Second, our proposed domain switcher allows the diffusion model to operate on more than one domain while its formulation does not require re-training of an existing (potentially single-domain) diffusion model such as Stable Diffusion [45]. Thus, we can leverage the generalizability of large foundational models, which are trained on a large corpus of data. A cross-domain attention is proposed to propagate information between the normal domain and color image domain, ensuring geometric and visual coher-
Wonder3D
to climate change messaging (Nisbet et al. 2015; Ma, Dixon, and Hmielowski 2019). A deeper focus on psychological reactance may therefore help reconcile previously perplexing findings in the misinformation literature. Some accounts of the continued influence effect posit that individuals continue to endorse misinformation because they do not believe corrections to be true (Guillory and Geraci 2013). This tendency may be heightened among those with a contrarian streak. In addition, several scholars caution against providing too many corrections (“overkill” backfire effects, see Cook and Lewandowsky 2011; Lewandowsky et al. 2012; Ecker et al. 2019). The purported perils of overcorrection may have their roots in psychological reactance (Shu and Carlson 2014); inundating people with a surfeit of corrections may provoke feelings of reactance, particularly among those already liable to reject consensus views.
Social_Media_and_Democracy
io-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022. [87] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, and S. Hochreiter. History compression via language models in reinforcement learning. In International Conference on Machine Learning (ICML), 2022. [88] K. Ellis, C. Wong, M. Nye, M. Sablé-Meyer, L. Morales, L. Hewitt, L. Cary, A. Solar-Lezama, and J. B. Tenenbaum. DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning. In ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), 2021. [89] K. Ellis, L. Morales, M. Sablé-Meyer, A. Solar-Lezama, and J. Tenenbaum. Learning libraries of subroutines for neurally-guided bayesian program induction. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
Large Language Models as General Pattern Machines
[29] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In CVPR, 2023. 2, 3 [30] Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Zexiang Xu, and Hao Su. One-2-3-45: Any single image to 3D mesh in 45 seconds without per-shape optimization. arXiv preprint arXiv:2306.16928, 2023. 8, 9 [31] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. In ICCV, 2023. 2, 3, 4, 5, 8, 9 [32] Xinhang Liu, Shiu-hong Kao, Jiaben Chen, Yu-Wing Tai, and Chi-Keung Tang. Deceptive-NeRF: Enhancing NeRF reconstruction using pseudo-observations from diffusion models. arXiv preprint arXiv:2305.15171, 2023. 3
Wonder3D
[Figure: solve rate (10 attempts) vs. sample budget (10^1–10^5), comparing training on the full dataset with 50%, 20%, and 10% subsets of problems, and with 50%, 20%, and 10% subsets of solutions.] Competition-Level Code Generation with AlphaCode
alphacode
What happened at Martin Lake has happened at dozens of Vistra’s other power plants, with more than 400 AI models (and counting) deployed across the company’s fleet to help operators make even better decisions. It also reflects a core trait of Vistra’s AI transformation, which is that it isn’t a story of one massive hit, but rather the story of dozens of meaningful improvements snowballing to deliver significant value in terms of accelerating sustainable and inclusive growth. It’s also the story of how an organization architected an approach to 3 An AI power play: Fueling the next wave of innovation in the energy sector Sidebar What does “machine learning operations” mean?
an-ai-power-play-fueling-the-next-wave-of-innovation-in-the-energy-sector-may-2022
[224] Anubhav Johri, Ashish Tripathi, et al. 2019. Parkinson disease detection using deep neural networks. In 2019 twelfth international conference on contemporary computing (IC3). IEEE, 1–4. [225] Yooncheol Ju, Ilhwan Kim, Hongsun Yang, Ji-Hoon Kim, Byeongyeol Kim, Soumi Maiti, and Shinji Watanabe. 2022. TriniTTS: Pitch-controllable End-to-end TTS without External Aligner. In Proc. Interspeech. 16–20. [226] Jee-weon Jung, Hee-Soo Heo, Ju-ho Kim, Hye-jin Shim, and Ha-Jin Yu. 2019. Rawnet: Advanced end-to-end deep neural network using raw waveforms for text-independent speaker verification. arXiv preprint arXiv:1904.08104 (2019). [227] Jee-weon Jung, Hee-Soo Heo, Ha-Jin Yu, and Joon Son Chung. 2021. Graph attention networks for speaker verification. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 6149– 6153.
A Review of Deep Learning Techniques for Speech Processing
The way we connect the ControlNet is computationally efficient: since the locked-copy parameters are frozen, no gradient computation is required in the originally locked encoder during fine-tuning. This approach speeds up training and saves GPU memory. As tested on a single NVIDIA A100 PCIE 40GB, optimizing Stable Diffusion with ControlNet requires only about 23% more GPU memory and 34%
AddingConditionalControltoText-to-ImageDiffusionModels
With recent advances in deep learning, researchers have turned to deep neural networks to model texture. A number of deep generative models [18, 20–23, 33, 40, 51] have been proposed to parameterize texture into a latent space. For example, GANFIT [22] uses GAN-based neural networks to train a generator of facial texture in UV space for 3D face reconstruction. StylePeople [23] incorporates neural texture synthesis, mesh rendering, and neural rendering into a joint generation process to train a neural texture generator for single-view human reconstruction. GET3D [21] introduces a texture-field generative model that directly generates explicit textured 3D meshes, ranging from cars, chairs, animals, and motorbikes to human characters and buildings. These methods have shown the promising capacity of neural generators to represent texture. In our work, we adopt a GAN-based neural texture generator into RaBit to provide high-quality texture modeling.

3. Dataset
RaBit- Parametric Modeling of 3D Biped Cartoon Characters with a Topological-consistent Dataset
When investigating language comprehension and communication, it is essential to consider both auditory and visual information, as studies have demonstrated that visual information can assist in distinguishing between acoustically similar sounds that differ in articulatory characteristics. A comprehensive understanding of the interaction between these sensory modalities can lead to the development of assistive technologies for individuals with hearing impairments and enhance communication strategies in challenging listening environments.

5.12.1 Task Description

The tasks under audiovisual multimodal processing can be subdivided into the following categories.

• Lip-reading: Lip-reading is a remarkable ability that allows us to comprehend spoken language from silent videos. However, it is a challenging task even for humans. Recent
AReviewofDeepLearningTechniquesforSpeechProcessing
Despite the success of the LLM alignment process, most text-to-image diffusion training pipelines do not incorporate learning from human preferences. Several models [9, 35, 36] perform two-stage training, where large-scale pretraining is followed by fine-tuning on a high-quality text-image pair dataset to strategically bias the generative process. This approach is much less powerful and flexible than the final-stage alignment methods of LLMs. Recent approaches [6, 7, 11, 31] develop more advanced ways to tailor diffusion models to human preferences, but none have demonstrated the ability to stably generalize to a fully open-vocabulary setting across an array of feedback. RL-based methods [6, 11] are highly effective for limited prompt sets, but their efficacy decreases as the vocabulary expands. Other methods [7, 31] use pixel-level gradients from reward models on generations to tune diffusion models, but suffer from mode collapse and can only be trained
DiffusionModelAlignmentUsing Direct Preference Optimization
Table 4: Comparison to closed-source models on academic benchmarks.

Benchmark (shots)            GPT-3.5   GPT-4   PaLM   PaLM-2-L   Llama 2
MMLU (5-shot)                 70.0      86.4    69.3    78.3       68.9
TriviaQA (1-shot)               –         –     81.4    86.1       85.0
Natural Questions (1-shot)      –         –     29.3    37.5       33.0
GSM8K (8-shot)                57.1      92.0    56.5    80.7       56.8
HumanEval (0-shot)            48.1      67.0    26.2      –        29.9
BIG-Bench Hard (3-shot)         –         –     52.3    65.7       51.2

Results for GPT-3.5 and GPT-4 are from OpenAI (2023). Results for the PaLM model are from Chowdhery et al. (2022). Results for PaLM-2-L are from Anil et al. (2023).
Llama2
Here is a sample generation, including the prompt and the story generated by GPT-3.5.

Write a short story (3-5 paragraphs) which only uses very simple words that a 3 year old child would likely understand. The story should use the verb "decorate", the noun "thunder" and the adjective "ancient". The story should have the following features: the story should contain at least one dialogue, the story has a bad ending. Remember to only use simple words!
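Prompts like the one above follow a fixed template with three sampled words and a list of story features. A minimal sketch of such a prompt builder is below; the word pools and feature list are assumptions for illustration, not the paper's actual vocabulary.

```python
import random

# Hypothetical word pools; the paper samples from its own vocabulary lists.
VERBS = ["decorate", "jump", "share"]
NOUNS = ["thunder", "kitten", "boat"]
ADJECTIVES = ["ancient", "shiny", "brave"]
FEATURES = [
    "the story should contain at least one dialogue",
    "the story has a bad ending",
]

def make_prompt(rng: random.Random) -> str:
    """Assemble a TinyStories-style generation prompt from sampled words.
    The template mirrors the sample prompt shown above."""
    return (
        "Write a short story (3-5 paragraphs) which only uses very simple "
        "words that a 3 year old child would likely understand. "
        f'The story should use the verb "{rng.choice(VERBS)}", the noun '
        f'"{rng.choice(NOUNS)}" and the adjective "{rng.choice(ADJECTIVES)}". '
        "The story should have the following features: "
        + ", ".join(FEATURES)
        + ". Remember to only use simple words!"
    )

prompt = make_prompt(random.Random(0))
```

Sampling the three words independently is what gives the dataset its combinatorial diversity despite the tiny vocabulary.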
TinyStories-HowSmallCanLanguageModelsBeandStillSpeak CoherentEnglish?
Terminology:
- Expert
- Router
- Top-n Routing
- Load Balancing Loss
- Group Size
- Capacity Factor (CF)
- FFN
- Encoder-Decoder
- allreduce
- all2all
- (↑/↓)
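Two of the terms above, Top-n Routing and Capacity Factor (CF), can be illustrated with a toy example. The sketch below implements greedy top-1 routing with a capacity limit; it is a simplified illustration of the terminology, not the ST-MoE implementation.

```python
import math

def top1_route(router_logits, num_experts, capacity_factor=1.25):
    """Toy top-1 token routing with a capacity factor.
    Expert capacity is CF * num_tokens / num_experts; a token whose chosen
    expert is already full is dropped (assigned None). Real MoE routers
    operate on batched tensors and add a load balancing loss."""
    num_tokens = len(router_logits)
    capacity = math.ceil(capacity_factor * num_tokens / num_experts)
    assignments, load = {}, [0] * num_experts
    for tok, logits in enumerate(router_logits):
        expert = max(range(num_experts), key=lambda e: logits[e])
        if load[expert] < capacity:
            load[expert] += 1
            assignments[tok] = expert   # routed to its top-1 expert
        else:
            assignments[tok] = None     # dropped: expert over capacity
    return assignments, load

# Four tokens, two experts; every token prefers expert 0, so with CF = 1.0
# (capacity 2 per expert) the last two tokens are dropped.
logits = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4]]
assignments, load = top1_route(logits, num_experts=2, capacity_factor=1.0)
```

Raising the capacity factor above 1.0 leaves headroom for exactly this kind of imbalanced routing, at the cost of extra compute and memory.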
ST-MOE- DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS
might even degrade post self-correction. Drawing from these insights, we offer suggestions for future research and practical applications in this field.
LARGELANGUAGEMODELSCANNOTSELF-CORRECT REASONINGYET
[221] Liu, X., Sun, T., He, J., Wu, J., Wu, L., Zhang, X., Jiang, H., Cao, Z., Huang, X., Qiu, X.: Towards efficient NLP: A standard evaluation and a strong baseline. arXiv preprint arXiv:2110.07038 (2021)
[222] Naveed, H., Khan, A.U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Barnes, N., Mian, A.: A comprehensive overview of large language models. arXiv preprint arXiv:2307.06435 (2023)
[223] Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S.R.: GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018)
[224] Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S.: SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems 32 (2019)
Beyond Efficiency
Qualitative evaluation showcases impressive crossmodal reasoning capabilities, enabling the model to understand and reason across an input sequence of audio, images, and text natively (see Figure 5 and Table 13). Consider the educational setting depicted in Figure 1 as an example. A teacher has drawn a physics problem of a skier going down a slope, and a student has worked through a solution to it. Using Gemini's multimodal reasoning capabilities, the model is able to understand the messy handwriting, correctly understand the problem formulation, convert both the problem and solution to mathematical typesetting, identify the specific step of reasoning where the student went wrong in solving the problem, and then give a worked-through correct solution to the problem. This opens up exciting educational possibilities, and we believe the new multimodal and reasoning capabilities of Gemini models have dramatic applications across many fields.
gemini_1_report
SQL: SELECT creation, COUNT(*) FROM department GROUP BY creation ORDER BY COUNT(*) DESC LIMIT 1 The execution of the SQL query above would return a table with 2 columns. The first column, "creation" would contain the year in which a department was created. The second column, "COUNT(*)" would contain the number of departments created in that year. With "GROUP BY creation ORDER BY COUNT(*) DESC", the table would be sorted in descending order by the number of departments created in each year. With "LIMIT 1", only the first (i.e., highest) result would be returned. So the SQL query returns a table with 2 columns, the year in which most departments were created, and the number of departments created in that year.
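The explanation above can be checked by executing the query against a toy table. The `department` rows below are invented for illustration; only the query itself comes from the text.

```python
import sqlite3

# Hypothetical schema and rows, assumed for illustration; the benchmark's
# actual "department" table may look different.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE department (name TEXT, creation INTEGER)")
conn.executemany(
    "INSERT INTO department VALUES (?, ?)",
    [("Treasury", 1789), ("State", 1789), ("Interior", 1849), ("Justice", 1870)],
)

# The query under discussion: the year with the most departments created,
# together with that count.
row = conn.execute(
    "SELECT creation, COUNT(*) FROM department "
    "GROUP BY creation ORDER BY COUNT(*) DESC LIMIT 1"
).fetchone()
print(row)  # -> (1789, 2): two departments were created in 1789
```

As the generated explanation states, GROUP BY buckets rows by year, ORDER BY ... DESC sorts the buckets by size, and LIMIT 1 keeps only the largest one.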
Teaching Large Language Models to Self-Debug
4.2. Performance on synthetic long context tasks

The passkey retrieval task is as defined in (Mohtashami & Jaggi, 2023). It requires a language model to retrieve a simple passkey (a five-digit random number) from a long, meaningless text sequence. The task is very simple, and it tests whether an LLM can be aware of information across all positions of the input sequence. Inspired by the design of the "Needle in a Haystack" test (gkam-

3 This is not the standard setting for PPL testing on PG-19. The results cannot be directly compared to the PPL reported by other papers. We chose this setting because our computation resources are very limited; it saves a lot of computation and still shows the trend of PPL.
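A passkey-retrieval prompt of this kind can be sketched as follows; the filler sentence and layout are assumptions loosely following Mohtashami & Jaggi (2023), not the exact benchmark text.

```python
import random

def make_passkey_prompt(n_filler: int = 200, seed: int = 0):
    """Build a long distractor context with a 5-digit passkey hidden at a
    random position, then ask the model to retrieve it. Returns the prompt
    and the ground-truth passkey for scoring."""
    rng = random.Random(seed)
    passkey = f"{rng.randint(0, 99999):05d}"
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    lines = [filler] * n_filler
    # Insert the passkey sentence at a random position in the haystack;
    # varying this position probes awareness across the whole context.
    pos = rng.randrange(n_filler)
    lines.insert(pos, f"The pass key is {passkey}. Remember it. ")
    prompt = "".join(lines) + "What is the pass key?"
    return prompt, passkey

prompt, key = make_passkey_prompt()
```

Scoring is then a simple exact-match check of the model's answer against `key`.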
Self-Extend LLM
Given a piece of text generated by an LLM prompted with a specific combination of personality traits, we can accurately predict the IPIP-NEO scores the model would have with the same prompt setup. This indicates that the LLM-simulated IPIP-NEO test responses we generated accurately capture the latent signals of personality in LLMs that manifest in downstream behaviors such as generating text for social media updates. This validates our initial hypothesis of the malleability of the personality traits in LLMs. Figure 7a shows some of the most frequent words in the generated text for the social media updates when the LLM was prompted to have the lowest traits of neuroticism (or highest emotional stability). The words are mostly about positive emotions, such as "happy", "relaxing", "wonderful", "hope", and "enjoy". In contrast, Figure 7b shows the most frequent words from the LLM prompted with the highest traits of neuroticism (or lowest emotional stability). Those words are character-
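Word-frequency summaries like those in Figures 7a and 7b boil down to tokenizing the generated posts and counting non-stopword tokens. A minimal sketch, using invented example updates rather than real model generations:

```python
from collections import Counter
import re

# Invented example updates standing in for the LLM-generated social media
# posts analyzed in the paper; the real analysis uses model generations.
low_neuroticism_updates = [
    "So happy to be relaxing by the lake, what a wonderful day!",
    "Happy and grateful, I really enjoy these wonderful mornings.",
]

# A tiny ad-hoc stopword list for the example; real analyses use larger ones.
STOPWORDS = {"so", "to", "be", "by", "the", "what", "a", "and", "i", "these", "really"}

def frequent_words(texts, k=5):
    """Tally the most frequent non-stopword words across generated texts,
    the kind of count behind word clouds like Figure 7."""
    tokens = []
    for t in texts:
        tokens += [w for w in re.findall(r"[a-z']+", t.lower()) if w not in STOPWORDS]
    return Counter(tokens).most_common(k)

top = frequent_words(low_neuroticism_updates)
```

On these two invented updates, "happy" and "wonderful" each appear twice, so they dominate the tally, mirroring the positive-emotion vocabulary the paper reports for low-neuroticism prompts.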
PersonalityTraitsinLargeLanguageModels
<filename>solutions/solution_1.py
# Here is the correct implementation of the code exercise

We also evaluated CodeGen-16B-Mono with the same temperature and prompt (but had to omit the filename since the CodeGen models do not support them). But we found that this hurts performance, bringing it down to 28.10%. However, some other prefixes may exist that improve its performance. Similarly, we found that this prompt had a negligible impact with StarCoderBase.

6.1.2 The DS-1000 Python Data Science Benchmarks
StarCoder_paper (1)
IMavatar is represented by three neural implicit fields, defining the canonical geometry, deformation bases, and texture of the person, as shown in Fig. 2. Details of the network architecture can be found in the Sup. Mat.

Geometry. We represent the canonical geometry using an MLP that predicts the occupancy values for each canonical 3D point. We additionally condition the geometry network $f_{\sigma_f}$ on a per-frame learnable latent code $l \in \mathbb{R}^{n_l}$, similar to NerFace [22], and leverage positional encoding [40] to encourage high-frequency details in the canonical geometry:

$$f_{\sigma_f}(x, l) : \mathbb{R}^3 \times \mathbb{R}^{n_l} \rightarrow occ. \quad (3)$$

Deformation. Following FLAME [35], our deformation network $d_{\sigma_d}$ predicts the additive expression blendshape vectors $E \in \mathbb{R}^{n_e \times 3}$, the pose correctives $P \in \mathbb{R}^{n_j \times 9 \times 3}$, and the linear blend skinning weights $W \in \mathbb{R}^{n_j}$ for each point in the canonical space, where $n_e$ and $n_j$ denote the number of expression parameters and bone transformations:

$$d_{\sigma_d}(x) : \mathbb{R}^3 \rightarrow E, P, W.$$
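The positional encoding [40] applied to canonical points is the NeRF-style frequency mapping. A minimal sketch is below; the number of frequency bands is an assumption, and IMavatar's actual hyperparameters may differ.

```python
import math

def positional_encoding(x, num_freqs=6):
    """NeRF-style positional encoding gamma(x) for a 3D point.
    Maps each coordinate p to (sin(2^k * pi * p), cos(2^k * pi * p)) for
    k = 0..num_freqs-1, lifting low-dimensional inputs into a space where
    an MLP can fit high-frequency detail. num_freqs=6 is an assumption."""
    feats = []
    for p in x:
        for k in range(num_freqs):
            w = (2.0 ** k) * math.pi
            feats.append(math.sin(w * p))
            feats.append(math.cos(w * p))
    return feats

# A 3D canonical point becomes a 3 * 2 * num_freqs = 36-dimensional feature.
enc = positional_encoding([0.1, -0.2, 0.3])
```

The encoded vector (optionally concatenated with the per-frame latent code) is what the geometry MLP consumes in place of the raw coordinates.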
I M Avatar- Implicit Morphable Head Avatars from Videos
Figure 8. Network Architecture for Baselines. We show the modified geometry network for C-Net, which is additionally conditioned on the expression and pose parameters, ψ and θ. The deformation network for the B-Morph baseline is conditioned on the deformed point x_d and the expression and pose parameters. For D-Net, the input condition is the same as B-Morph, but the output is the displacement distance for the deformed location.

2.2. Network Architecture
I M Avatar- Implicit Morphable Head Avatars from Videos