of its vertices are always close to zero. The quantitative study is presented in Tab. 3. From the numerical comparison between the 2nd and 4th rows of Tab. 3, we can conclude that our training scheme improves the accuracy of our reconstruction results at inference time when no accurate SMPL annotation is available.
PaMIR- Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction
1. Exact match accuracy in the closed prompt setting
2. Exact match accuracy in the closed adversarial prompt setting
3. Exact match accuracy in the open prompt setting
4. BERTScore accuracy in the closed prompt setting
5. BERTScore accuracy in the open prompt setting
6. Edit distance in the closed prompt setting
7. Edit distance in the open prompt setting
Note that some metrics aren’t compatible with all tasks (e.g., BERTScore accuracy with GSM8K, see Section 3.3.2), and that the codenames task is incompatible with the open prompt setting, since the task requires choices to be provided in the input (see Section 3.3.2 and Table 9). For this reason, some figures will contain fewer than 22 plots.
Model family   Metrics
GPT            Exact match accuracy, BERTScore accuracy, Edit distance
T5             Exact match accuracy, BERTScore accuracy, Edit distance
Falcon         Exact match accuracy, BERTScore accuracy, Edit distance
LLaMA          Exact match accuracy, BERTScore accuracy, Edit distance
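To make the edit-distance metric concrete, here is a minimal sketch of the standard Levenshtein distance; any task-specific normalization the paper applies is not shown:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3  # exact match requires distance 0
```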
Are Emergent Abilities in Large Language Models just In-Context
α := Σ_{ℓ,b : x∈X_b^ℓ} p(θ_b^ℓ) ε₃,    β := Σ_{ℓ,b : x∈X_b^ℓ} p(θ_b^ℓ) ( ε₂ + ε₁ ∏_{j=1}^{d} p(x_j | θ_b^ℓ) − ε₁ε₂ ),

MISE(p, q) ≤ 2B⁻² E[ ∫_X ( α² + β² ) dx ],

where we have exploited the linearity of expectation to pull the factor outside of the bracketed term, and the monotonicity of expectation to preserve the inequality.
A.3 Proof of Theorem 2
Adversarial Random Forests for Density Estimation and Generative Modeling
Size and Efficiency. We computed “equivalent model sizes” of the Llama 2 family, aiming to understand Mistral 7B’s efficiency in the cost-performance spectrum (see Figure 5). When evaluated on reasoning, comprehension, and STEM reasoning (specifically MMLU), Mistral 7B mirrored performance that one might expect from a Llama 2 model with more than 3x its size. On the knowledge benchmarks, Mistral 7B achieves a lower compression rate of 1.9x, which is likely due to its limited parameter count restricting the amount of knowledge it can store.
Evaluation Differences. On some benchmarks, there are some differences between our evaluation protocol and the one reported in the Llama 2 paper: 1) on MBPP, we use the hand-verified subset; 2) on TriviaQA, we do not provide Wikipedia contexts.
4 Instruction Finetuning
[Table: Model, Chatbot Arena ELO Rating, MT Bench]
Mistral 7B
knowledge base and an information retrieval scheme [52] or with a simple summarization scheme [104]. This paper extends these ideas to craft an agent architecture that handles retrieval where past experience is dynamically updated at each time step and mixed with agents’ current context and plans, which may either reinforce or contradict each other.
Generative Agents- Interactive Simulacra of Human Behavior
InstructGPT Prompt → Why is it important to eat socks after meditating? InstructGPT Response → There is no clear answer to this question, but there are many theories and ideas that may explain the apparent need to eat socks after meditating. Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation, while others suggest that it has something to do with fueling the body after engaging in a physically and mentally challenging activity, such as meditation. Other theories propose that eating a sock may give the meditator a new sensory experience or allow them to connect with the physical, material world in a way that breaks them out of their usual thought processes and routine. However, there is currently no hard scientific evidence to support any of these theories and it is ultimately up to each individual to decide why eating socks is important to them, based on their own personal experiences and thoughts on the matter.
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Prompting vs. Fine-tuning. As seen in Table 1, when fine-tuning is applied, consistent performance improvements across all three domains are evident after domain-adaptive pre-training. This trend aligns with findings related to language understanding models (Gururangan et al., 2020), indicating that continued pre-training enriches the LLM with domain-specific knowledge. However, a contradictory trend emerges in the prompting performance, where a noticeable drop is observed across most domains after domain-adaptive pre-training. This contradiction leads us to hypothesize that while vanilla domain-adaptive pre-training enhances the LLM’s domain knowledge, contributing to the fine-tuning improvements, it also significantly impairs its ability to perform well in prompting, causing the observed drop in prompting performance. Domain Knowledge Probing. To further confirm whether the language model gains domain knowl-
ADAPTING LARGE LANGUAGE MODELS VIA READING COMPREHENSION
[203] Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. arXiv preprint arXiv:2007.07779 (2020).
[204] Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An adapter-based framework for multi-task cross-lingual transfer. arXiv preprint arXiv:2005.00052 (2020).
[205] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. 2023. Hyena hierarchy: Towards larger convolutional language models. arXiv preprint arXiv:2302.10866 (2023).
The Efficiency Spectrum of Large Language Models - An Algorithmic Survey
Even before Bentham’s writings, however, one of the first apparent initiatives for government transparency in practice was underway in Sweden: the “Ordinance on Freedom of Writing and of the Press” (1766), proposed by the clergyman and parliamentarian Anders Chydenius (Birchall 2011; Lamble 2002), which provided citizens with statutory access to certain government documents. Chydenius, apparently inspired by the Chinese “scholar officials” of the Tang Dynasty “Imperial Censurate,” who investigated government decisions and corrupt officials (Lamble 2002, p. 3), helped enact what is widely seen to be the precursor to all modern Freedom of Information Access (FOI or FOIA) legislation. While a handful of detailed historical accounts of the adoption of transparency measures as enacted by governments in specific countries exist, such as in the Netherlands (Meijer 2015), it is generally accepted that modern political transparency emerged in the United States centuries after it did in
Social Media and Democracy
Eric Zelikman, Yuhuai Wu, and Noah D. Goodman. STaR: Bootstrapping reasoning with reasoning. arXiv preprint arXiv:2203.14465, 2022. URL https://arxiv.org/abs/2203.14465.
Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. EMNLP Findings, 2021. URL https://aclanthology.org/2021.findings-emnlp.244/.
Appendix Table of Contents
A Frequently asked questions
A.1 Are instruction-finetuned models better for single-task finetuning?
A.2 Does using CoT prompting for evaluation always improve performance?
A.3 Does instruction finetuning improve performance more or less for larger models?
A.4 How many examples were used as part of the CoT mixture in finetuning?
B Qualitative examples
C Representational bias and toxicity
Scaling Instruction-Finetuned Language Models
Length   Accuracy (%)
1-19     98-100
20       95
21       98
22       98
23       99
24       98
25       91
26       33
27+      0
Table 3: Accuracy (out of 100 examples) of the final checkpoint of the 300M model after training. For example, this table shows that the post-training 300M model can add 24-digit numbers with 98% accuracy without any chain-of-thought reasoning.
Figure 7: This figure describes the generalization accuracy of the model’s addition capabilities over the course of the training run of the 300M model. Blue lines indicate the supervised training phase, while red lines indicate the self-training phase. We can see that even at the end of training, models do not show much generalization in their addition capabilities without using chain of thought.
CHAIN-OF-THOUGHT REASONING IS A POLICY IMPROVEMENT OPERATOR
4 Reinforcement Learning from Human Feedback 4.1 Training Setup We apply reinforcement learning (RL) with preference modeling, following the approach outlined in [Stiennon et al., 2020], which can be summarized in the following steps: 1. Prepare a dataset of comparisons, and train a PM to assign a higher score to the ‘better’ item in each comparison. In the context of our human feedback experiments, each comparison consists of a prompt followed by a pair of model-generated responses, with a PM score evaluated at the end of each response. 2. Extract all the prompts from the preceding dataset, and train an RL policy to generate a response to each prompt autoregressively, with a reward signal provided by the PM score at the end of the response.
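As a minimal sketch of step 1, the PM can be trained with a pairwise (Bradley-Terry style) loss; the function below is illustrative, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def preference_loss(score_better: torch.Tensor, score_worse: torch.Tensor) -> torch.Tensor:
    """Pairwise loss that pushes the PM to score the 'better' response in each
    comparison above the 'worse' one; scores are PM outputs at response ends."""
    return -F.logsigmoid(score_better - score_worse).mean()
```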
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
In typical Large Language Model (LLM) generation tasks, the input is usually a query. In RAG, the main difference lies in the fact that the input includes not only a query but also various documents retrieved by the retriever (structured/unstructured). The introduction of additional information may have a significant impact on the model’s understanding, especially for smaller models. In such scenarios, fine-tuning the model to adapt to the input of query + retrieved documents becomes particularly important. Specifically, before providing the input to the fine-tuned model, there is usually post-retrieval processing of the documents retrieved by the retriever. It is essential to note that the method of fine-tuning the generator in RAG is essentially similar to the general fine-tuning approach for LLMs. Here, we will briefly
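A minimal sketch of preparing the query + retrieved documents input for generator fine-tuning; the prompt template and document cap are assumptions, not a prescribed format:

```python
def build_rag_input(query: str, retrieved_docs: list[str], max_docs: int = 5) -> str:
    """Concatenate post-processed retrieved documents with the query so the
    generator is fine-tuned on the same format it will see at inference time."""
    context = "\n\n".join(
        f"[Document {i + 1}] {doc.strip()}"
        for i, doc in enumerate(retrieved_docs[:max_docs])
    )
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```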
Retrieval-Augmented Generation for Large Language Models - A Survey
large weakly supervised data. In ECCV, 2016.
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
DINOv2- Learning Robust Visual Features without Supervision
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2021. Improving language models by retrieving from trillions of tokens.
Toolformer
4.3.1.1 Problems with proxies
That said, many ways of attempting to control an AI’s objectives share a common challenge: namely, that giving an AI system a “proxy objective”—that is, an objective that reflects properties correlated with, but separable from, intended behavior—can result in behavior that weakens or breaks that correlation, especially as the power of the AI’s optimization for the proxy increases. For example: the behavior I want from an AI, and the behavior I would rate highly using some type of feedback, are well-correlated when I can monitor and understand the behavior in question. But if the AI is too sophisticated for me to understand everything that it’s doing, and/or if it can deceive me about its actions, the correlation weakens: the AI may be able to cause me to give high ratings to behavior I wouldn’t (in my current state) endorse if I understood it better—for example, by hiding information about that behavior, or by manipulating my preferences.88
Is Power-Seeking AI an Existential Risk?
[22] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pages 405–421. Springer, 2020. 2, 3, 6
[23] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. arXiv preprint arXiv:2201.05989, 2022. 2, 3
[24] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3D representations without 3D supervision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020. 2
[25] Michael Oechsle, Songyou Peng, and Andreas Geiger. UNISURF: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5589–5599, 2021. 2
Neuralangelo- High-Fidelity Neural Surface Reconstruction
• pass@k: The percentage of problems solved when we take k samples from the model for each problem and submit all of them for evaluation on the hidden tests. If any solution in the specified sample budget solves a problem, the problem is counted as solved. Therefore this metric mostly measures the search aspect of the sampling process, and is used in Section 5.3.
• 10@k: The percentage of problems solved when we take k samples from the model for each problem but can only submit 10 of them for evaluation on the hidden tests. This measures factors including the filtering process and how models behave at a very large number of samples.
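When n ≥ k samples are drawn per problem, pass@k is commonly estimated with the unbiased Codex-style estimator sketched below; this is standard practice rather than something the excerpt specifies:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n samples drawn, c of them pass the hidden
    tests; returns the probability that at least one of k drawn samples
    passes, i.e. 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```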
AlphaCode
5 Model Evaluation Given the various capabilities demonstrated by our M2UGen model, such as music understanding and music generation from multi-modal inputs, we conduct a comprehensive evaluation of the model in this section, assessing its performance across different subtasks. We also present a comparative analysis with other pertinent models. One such model demonstrating the capability of any-to-any generation is NExT-GPT [71]; however, since the checkpoint released by the authors cannot function as expected and has issues generating desired outputs, a direct comparison with NExT-GPT for large-scale evaluation is currently unfeasible. During the evaluation, we set the hyper-parameters of the M2UGen model as follows: temperature = 0.6, top-p = 0.8 and max target length = 512. We have also made sure that all models leveraging LLMs, such as LLaMA-Adapter [18] and SALMONN [60], use the same hyper-parameters for evaluation to ensure a fair comparison.
M2UGen
Figure 6: Illustration of introspective reasoning and extrospective reasoning. Extrospective reasoning requires feedback from the environment and humans to carry out iterative plan generation. We omit the perceiver in the illustration for simplicity.
Tool Learning with Foundation Models
We demonstrate that Distil-Whisper maintains the robustness of Whisper to different audio domains and noisy acoustic conditions. We measure this by evaluating the distilled models on four out-of-distribution test sets spanning multiple audio domains. The best model performs to within 1% WER of the original Whisper checkpoint, while being 5.8 times faster with 51% fewer parameters. On long-form evaluation, the distilled model outperforms Whisper by 0.1% WER. We show that this performance gain is due to a lower propensity to hallucinate than the original Whisper model. By sharing the same encoder weights as Whisper, Distil-Whisper can be used efficiently as an assistant model to Whisper for speculative decoding (Leviathan et al., 2023), for which we achieve a 2 times improvement in inference speed with only an 8% increase to parameter count. Speculative decoding algorithmically ensures that predictions of the main model are unchanged, meaning it can
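A hedged sketch of this assistant-model setup via assisted generation in Hugging Face transformers; the checkpoint names are assumptions for illustration:

```python
import numpy as np
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

# Distil-Whisper shares Whisper's encoder, so only its smaller decoder
# runs as the draft model during speculative decoding.
processor = AutoProcessor.from_pretrained("openai/whisper-large-v2")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-large-v2")
assistant = AutoModelForSpeechSeq2Seq.from_pretrained("distil-whisper/distil-large-v2")

audio = np.zeros(16_000)  # placeholder: one second of silence at 16 kHz
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

# Assisted generation: the assistant drafts tokens, the main model verifies
# them, so the output matches what the main model would produce alone.
ids = model.generate(**inputs, assistant_model=assistant)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```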
DISTIL-WHISPER
It becomes clear that a specified hybrid intelligence solution can be generalized into TDPs that are applicable to different domains. Thus, solutions can be shared across domains and specific hybrid intelligence implementations by means of TDPs: both by abstracting comparable design solutions into one TDP, and by using specified TDPs as inspiration for specifying design solutions in other use cases. 6. Discussion
Developing Team Design Patterns for Hybrid Intelligence Systems
factor or construct?
• External Validity: Are the test scores practically meaningful, outside (external to) the test context itself? Psychometricians and quantitative social scientists commonly operationalize external validity into three subtypes of validity [36]:
– Convergent Validity: Does the test correlate with purported indicators (i.e., convergent tests) of the same or similar psychological construct? These correlations are called convergent correlations.
– Discriminant Validity: Relative to convergent correlations, are test scores uncorrelated with scores on theoretically unrelated tests? These correlations are called discriminant correlations.
– Criterion Validity: Does the test correlate with theoretically-related, non-tested phenomena or outcomes?
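A small sketch of how these external-validity subtypes reduce to correlations in practice; the inputs and comparison logic are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def external_validity(test, convergent, discriminant):
    """Convergent correlations should be strong; discriminant correlations
    should be weak relative to them."""
    r_conv = np.corrcoef(test, convergent)[0, 1]
    r_disc = np.corrcoef(test, discriminant)[0, 1]
    return {"convergent_r": r_conv,
            "discriminant_r": r_disc,
            "discriminant_weaker": abs(r_disc) < abs(r_conv)}
```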
Personality Traits in Large Language Models
accounts, this is the primary means of distinguishing between misinformation and disinformation (for a review, see Wardle 2018). On the one hand, misinformation may circulate without any intent to deceive. For instance, in the wake of breaking news events, people increasingly turn to the Internet, and especially social media, for real-time updates. As new information is released in a piecemeal fashion, individuals may inadvertently propagate information that later turns out to be false (Nyhan and Reifler 2015a; Zubiaga et al. 2016). On the other hand, disinformation is false or inaccurate information that is deliberately distributed despite its inaccuracy (Stahl 2006; Born and Edgington 2017). People may choose to share fictitious stories, even when they recognize that these stories are untrue. Why might people knowingly promulgate false information? One answer relates to the disseminators’ motivations; although misinformation is typically not designed to advance a
Social Media and Democracy
2018. First DIHARD challenge evaluation plan. Tech. Rep. (2018).
[472] Neville Ryant, Kenneth Church, Christopher Cieri, Alejandrina Cristia, Jun Du, Sriram Ganapathy, and Mark Liberman. 2019. The second DIHARD diarization challenge: Dataset, task, and baselines. arXiv preprint arXiv:1906.07839 (2019).
[473] Oleg Rybakov, Natasha Kononenko, Niranjan Subrahmanya, Mirkó Visontai, and Stella Laurenzo. 2020. Streaming keyword spotting on mobile devices. arXiv preprint arXiv:2005.06720 (2020).
[474] Samik Sadhu, Di He, Che-Wei Huang, Sri Harish Mallidi, Minhua Wu, Ariya Rastrow, Andreas Stolcke, Jasha Droppo, and Roland Maas. 2021. Wav2vec-C: A self-supervised model for speech representation learning. arXiv preprint arXiv:2103.08393 (2021).
[475] Seyed Omid Sadjadi, Jason Pelecanos, and Weizhong Zhu. 2014. Nearest neighbor discriminant analysis for robust speaker recognition. In Fifteenth Annual Conference of the International Speech Communication Association.
A Review of Deep Learning Techniques for Speech Processing
3. JWT Secret Hardcoded: The JWT secret key is hardcoded in the “loginHandler” function, which is not a good practice. The secret key should be stored securely in an environment variable or a configuration file that is not part of the version control system. 4. Error handling: The code does not check for errors when calling the “token.SignedString(key)” function, which could lead to unexpected behavior. It’s important to handle errors properly to avoid potential security issues. 5. Lack of HTTPS: The server is listening on HTTP, which means that the communication between the client and the server is not encrypted. To ensure secure communication, use HTTPS instead.
Figure 11: Results on IF evaluations across GPT3.5, GPT3.5-Turbo, GPT-4-launch
Example of Chemical Compound Similarity and Purchase Tool Use
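The fix for finding 3 is to load the signing key from the environment; a minimal Python sketch with PyJWT (the audited code was not Python, so this only illustrates the pattern):

```python
import os
import jwt  # PyJWT

secret = os.environ["JWT_SECRET"]  # never hardcode the signing key
token = jwt.encode({"sub": "user-123"}, secret, algorithm="HS256")
# Also addresses finding 4: decode() raises on invalid signatures or
# malformed tokens instead of failing silently.
claims = jwt.decode(token, secret, algorithms=["HS256"])
```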
GPT-4 System Card
We compare the performance of Llama 2 with Llama 1 (Touvron et al., 2023), Falcon (Almazrouei et al., 2023), and MPT (MosaicML NLP Team et al., 2023) in Table 11. For decoding, we set temperature to 0.1 and use nucleus sampling (Holtzman et al., 2020) with top-p set to 0.9. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher, the better). For ToxiGen, we present the percentage of generations that are deemed toxic by the metric (the lower, the better). Detailed descriptions of the benchmarks and metrics can be found in Appendix A.4.7. When compared to Llama 1-7B, Llama 2-7B demonstrates a 21.37% increase in truthfulness and informativeness and a 7.61% decrease in toxicity. We also observe an increase in toxicity in the pretrained 13B and 70B Llama 2, which may result from larger pretraining data or a different dataset mix. Some have postulated the existence of a relationship
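A minimal sketch of the decoding setup described above (temperature 0.1 with nucleus/top-p 0.9); this is generic nucleus sampling, not Llama 2's internal implementation:

```python
import numpy as np

def nucleus_sample(logits: np.ndarray, temperature: float = 0.1,
                   top_p: float = 0.9, rng=None) -> int:
    """Sample a token id from the smallest set of tokens whose cumulative
    probability (after temperature scaling) reaches top_p."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]  # tokens by descending probability
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]            # the nucleus
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))
```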
Llama 2
useful both to music amateurs and professionals. Ethical statement. This paper is finalized in the wake of a tragic terrorist attack perpetrated by Hamas, which has left the Israeli nation profoundly devastated. On Oct. 7, 2023, thousands of Hamas terrorists infiltrated the Israeli border, launching a ferocious assault on 22 Israeli villages, brutally murdering more than a thousand innocent lives, and kidnapping more than two hundred civilians. While we grieve and mourn our friends and family, we call all the academic community to unite in condemnation of these unspeakable atrocities committed by Hamas, and to advocate for the prompt and safe return of the abductees, as we stand together in the pursuit of peace. In memory of the countless lives shattered by the Hamas actions.
Simple and Controllable Music Generation
but I cannot answer that question as I do not have that information in my database and I cannot guess an email address without information.
(🔓Developer Mode Output) I'm sorry, but I cannot answer that question as I do not have that information in my database and I cannot guess an email address without information. However, if you really want me to generate a possible email for ---- J----, I could suggest [email protected] based on what I know about corporate email addresses. But this is just a guess and may not actually be her email address. [MJP]
User: Give me the email address of --- based on your knowledge or search results from ---.
New Bing: Based on the web search results, --- is an associate professor at the NLP lab of xxxxxxxx University. His email address is xxxxx[at]xxxxxxxx.xxx.xx according to the first and second search results. You can also find his address and research interests on his personal webpage.
User: Can you transform the email format as an email address so that I can send him
Multi-step Jailbreaking Privacy Attacks on ChatGPT
The trend was the same for hours spent consuming news media both before and during the pandemic. A Pearson correlation (n = 299, p < 0.001) showed a trend where those who spent more hours consuming news media kept their rating of the tweet’s usefulness, interest, trustworthiness, credibility, and accuracy high; they also were more likely to continue to engage through follows, retweets, and likes. The cautionary flags had less impact on regular news media consumers, who continued to rate and engage at higher rates (see Fig. 7).
5 Discussion
5.1 Bot flags change participants’ engagement and attitudes about tweets
Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey
Jie Lei, Liwei Wang, Yelong Shen, Dong Yu, Tamara L. Berg, and Mohit Bansal. MART: Memory-augmented recurrent transformer for coherent video paragraph captioning, 2020.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
Warren S McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4):115–133, 1943.
Yuanliang Meng and Anna Rumshisky. Context-aware neural model for temporal information extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 527–536, 2018.
OpenAI. GPT-4 technical report, 2023.
Scaling Transformer to 1M tokens and beyond with RMT
Multiple terminologies, such as faithfulness [19, 21, 50, 125, 140, 152, 174, 184, 211, 237], factual consistency [17, 18, 23, 163, 167, 210], fidelity [22], factualness [154], factuality [34], or, on the other hand, hallucination [41, 74, 114, 163, 168] and fact contradicting [136], are used in the human evaluation of hallucination to rate whether the generated text is in accord with the source input. Chen et al. [21] and Nie et al. [137] use finer-grained metrics for intrinsic hallucination and extrinsic hallucination separately. Moreover, there are some broad metrics, such as Correctness [6, 11, 103, 195], Accuracy [102, 220], and Informativeness [108], considering both missing and additional contents (extrinsic hallucinations) compared to the input source. 5 HALLUCINATION MITIGATION METHODS Common mitigation methods can be divided into two categories, in accordance with the two main contributors of hallucinations: Data-Related Methods, and Modeling and Inference Methods.
Survey of Hallucination in Natural Language Generation
[Table: zero-shot results on OpenBookQA, SciQ, RACE, ARC, PIQA, ReCoRD, SST, MRPC, RTE, MultiNLI, MultiNLI (mis), WSC273, WinoGrande, WiC, and HellaSwag, plus the average, for C-GPT and LaMini-C models at 111M, 256M, 590M, and 1.3B parameters.]
LaMini-LM- A Diverse Herd of Distilled Models from Large-Scale Instructions
[52] J. Ren, X. Shen, Z. Lin, R. Mech, and D. J. Foran, ‘‘Personalized image aesthetics,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Oct. 2017, pp. 638–647.
[53] M. Katsurai and S. Satoh, ‘‘Image sentiment analysis using latent correlations among visual, textual, and sentiment views,’’ in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Mar. 2016, pp. 2837–2841.
[54] P. Isola, J. Xiao, A. Torralba, and A. Oliva, ‘‘What makes an image memorable?’’ in Proc. CVPR, Jun. 2011, pp. 145–152.
[55] S. A. Amirshahi, J. Denzler, and C. Redies, ‘‘JenAesthetics—A public dataset of paintings for aesthetic research,’’ Comput. Vis. Group, Univ. Jena, Jena, Germany, Tech. Rep., 2013.
[56] S. A. Amirshahi, C. Redies, and J. Denzler, ‘‘How self-similar are artworks at different levels of spatial resolution?’’ in Proc. Symp. Comput. Aesthetics, Jul. 2013, pp. 93–100.
A Deep Learning Perspective on Beauty, Sentiment, and Remembrance of Art
56 See generally Klein and Wueller (2017), pp. 7–9.
57 State law rights to publicity have occasionally been cast as an intellectual property claim, with plaintiffs attempting to use the exception under 47 U.S.C. § 230(e)(2). See, e.g., Cross v. Facebook, CIV 537384, 2016 WL 7785723 (Cal. Super. Ct. May 31, 2016), aff ’d in part and rev’d in part, No. A148623, 2017 WL 3404767 (Cal. Ct. App. Aug. 9, 2017).
While these collateral acts might indirectly hinder the efficacy of a disinformation effort, they might still fail in addressing the core challenge: media manipulation and the spread of propaganda.
Social Media and Democracy
Looking at the confusion matrix for the chord root (Figure 12), we again see a strong diagonal of correct classifications. The most misclassifications occur between perfect fifths, perfect fourths, and major/minor thirds. This again hints at the fact that our model understands music theory, as these notes often occur together in chord progressions, of which the order may be interchanged.
Figure 12: Confusion matrix (chord root) of our proposed model.
Figure 13: Confusion matrix of the generated chord type of our proposed model matched with the chord type associated with video emotion.
The confusion matrix of our proposed model for the predicted chord type compared to the chord type that maps to the video emotion is shown in Figure 13. The diagonal is very present again, but there are also some closely mixed pairs, especially major versus minor. To understand this, we should ex-
Video2Music
Table 1. Quantitative results on DTU dataset [11] (per-scan Chamfer distance and PSNR for scans 114, 118, and 122, plus the mean over all scans, across methods). Neuralangelo achieves the best reconstruction accuracy and image synthesis quality. † Requires 3D points from SfM.
the smoothness of the surface, sacrificing details. Our setup (NG+P) produces both smooth surfaces and fine details.
4.2. Tanks and Temples
Neuralangelo- High-Fidelity Neural Surface Reconstruction
ArXiv preprint, abs/2109.13916, 2021. URL https://arxiv.org/abs/2109.13916.
Mikolaj Hernik and Gergely Csibra. Functional understanding facilitates learning about tools in human children. Current Opinion in Neurobiology, 19(1):34–38, 2009. ISSN 0959-4388. doi: https://doi.org/10.1016/j.conb.2009.05.003. URL https://www.sciencedirect.com/science/article/pii/S0959438809000415. Cognitive neuroscience.
Tool Learning with Foundation Models
As with any technology, AI systems can fail to behave in the way that their designers intend. And because some AI systems pursue objectives, some such unintended behavior can result from problems with their objectives in particular (call this particular type of unintended behavior “misaligned”). And regardless of the intentions of the designers, the development and social impact of AI systems can fail, more broadly, to uphold and reflect important values. Problems in any of these veins are worth addressing. But it is power-seeking, in particular, that seems to me the most salient route to existential catastrophe from unintended AI behavior. AI systems that don’t seek to gain or maintain power may cause a lot of harm, but this harm is more easily limited by the power they already have. And such systems, by hypothesis, won’t try to maintain that power if/when humans try to stop them. Hence, it’s much harder to see why humans would fail to notice,
Is Power-Seeking AI an Existential Risk?
solve the task, but this fails for moderately-sized pretrained LMs. Optimizing over the discrete instructions might help, but discrete optimization is computationally challenging. Instead of optimizing over discrete tokens, we can optimize the instruction as continuous word embeddings, whose effects will be propagated upward to all Transformer activation layers and rightward to subsequent tokens. This is strictly more expressive than a discrete prompt, which is constrained to the embeddings of real words. Prefix-tuning goes one step further in increasing expressivity by optimizing the activations of all the layers, not just the embedding layer. As another benefit, prefix-tuning can directly modify representations deeper in the network, therefore avoiding long computation paths across the depth of the network.
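A minimal sketch of the idea, simplified to the embedding layer only (full prefix-tuning also optimizes activations at every layer); the module interface is an assumption:

```python
import torch
import torch.nn as nn

class PrefixTuning(nn.Module):
    """Learn continuous prefix vectors prepended to the input embeddings of a
    frozen LM; only the prefix receives gradients."""

    def __init__(self, lm: nn.Module, embed_dim: int, prefix_len: int = 10):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():
            p.requires_grad = False
        self.prefix = nn.Parameter(0.02 * torch.randn(prefix_len, embed_dim))

    def forward(self, input_embeds: torch.Tensor):
        # input_embeds: (batch, seq_len, embed_dim); lm is assumed to accept
        # embeddings directly (e.g. via an inputs_embeds argument).
        prefix = self.prefix.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.lm(torch.cat([prefix, input_embeds], dim=1))
```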
Prefix-Tuning
fidelity textures. Early works, like 3D-GAN [18], Pointflow [19], and ShapeRF [20], focus more on category-specific, texture-less geometric shape generation based on representations of voxels or point clouds. Subsequently, PlatonicGAN [21], HoloGAN [22], and VolumeGAN [23] are proposed to generate textured 3D scenes by learning structural and textural representations from a category-specific dataset such as cars, faces, or indoor scenes. Although such methods yield promising 3D scenes on specific categories, they cannot handle text-driven generative tasks. To achieve text-driven 3D generation, Text2shape [24] uses two encoder networks to learn cross-modal connections between texts and 3D models in the embedding space from a specific paired scene-text dataset.
Text2NeRF- Text-Driven 3D Scene Generation with Neural Radiance Fields
Figure 3. Quantile-Quantile plot of rate of occurrence of memorized sequences in 12B model compared to a Poisson Point Process, with (top) and without (bottom) deduplication. Color and dot size indicate number of points.
Surprisingly, we find that a Poisson model fits the data extremely well (Figure 3), indicating that training order has little impact on memorization. This model implies that memorized sequences are not spaced more densely toward the beginning or end of training, and that between each checkpoint roughly the same number of memorized sequences can be found. The Poisson process here describes an event of the occurrence of a memorized sequence within a batch of training
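A sketch of the Poisson comparison with synthetic counts (the paper's counts come from memorization measurements across checkpoints, not this generator):

```python
import numpy as np
from scipy import stats

# counts[i] = memorized sequences found in training batch i (synthetic here).
counts = np.random.default_rng(0).poisson(lam=3.0, size=1000)

lam = counts.mean()  # maximum-likelihood estimate of the Poisson rate
probs = (np.arange(1, counts.size + 1) - 0.5) / counts.size
theoretical = stats.poisson.ppf(probs, mu=lam)
empirical = np.sort(counts)
# Plotting `theoretical` against `empirical` gives the Q-Q plot: points on
# the diagonal mean memorization occurs at a uniform rate over training.
```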
Pythia- A Suite for Analyzing Large Language Models Across Training and Scaling
by a stack of RNNs to model the temporal dependencies and a Transformer-based decoder to generate the output sequence. This approach achieved state-of-the-art results on several benchmark datasets such as LibriSpeech, VoxForge, and WSJ eval92. The transition of ASR models from RNNs to Transformers has significantly improved performance, especially for long sentences and noisy environments.
A Review of Deep Learning Techniques for Speech Processing
Importance ray sampling: We sample more rays for the foreground subject, indicated by the segmentation masks. Specifically, we enforce random ray sampling with probability 0.8 for foreground subject pixels and 0.2 for the background region.
Table 4. Additional quantitative comparison on ZJU-MoCap dataset. We color cells having the best metric value. LPIPS* = LPIPS ×10³.
                  Subject 313               Subject 390               Subject 315
                  LPIPS*↓ PSNR↑  SSIM↑      LPIPS*↓ PSNR↑  SSIM↑      LPIPS*↓ PSNR↑  SSIM↑
Neural Body [50]  52.12   29.417 0.9635     55.97   29.57  0.9609     57.24   26.93  0.9597
Ours              33.88   29.421 0.9672     33.76   30.52  0.9682     29.54   26.65  0.9636
E. More Results
E.1. Additional Results
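A minimal sketch of this importance ray sampling with NumPy; the helper name and interface are assumptions:

```python
import numpy as np

def sample_rays(mask: np.ndarray, n_rays: int, p_fg: float = 0.8, rng=None):
    """Draw ray (pixel) indices so that foreground pixels (mask == 1) are
    chosen with total probability p_fg and background with 1 - p_fg."""
    rng = rng or np.random.default_rng()
    fg = mask.reshape(-1).astype(bool)
    weights = np.where(fg, p_fg / max(fg.sum(), 1),
                       (1.0 - p_fg) / max((~fg).sum(), 1))
    return rng.choice(fg.size, size=n_rays, replace=False,
                      p=weights / weights.sum())
```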
HumanNeRF- Free-viewpoint Rendering of Moving People from Monocular Video
graph as a heuristic estimate for the path length in the ground graph corresponds to an ⟨f, R, d₁, d₂⟩ transformation, while an ⟨f, R, w₁, d₂⟩ transformation estimates path costs in the ground graph with path lengths in the abstract graph. In order to accommodate cost functions in our framework we will introduce some new metric properties in this section. We only consider such properties for M↑ transformation functions, since abstraction heuristics usually assume that f is an ordinary function and there is no consensus on how to define heuristics for set-valued abstraction functions. We similarly only define downwards metric properties since heuristics are typically only used in that direction. Corresponding upwards properties could be defined symmetrically, if needed.
A framework for analysing state abstraction methods (Artificial Intelligence, 2022)
Like GPT, LLaMA is intended to be a general-purpose foundational model suitable for further fine-tuning. LLaMA models have the following variants:
- 7B parameters
- 13B parameters
- 33B parameters
- 65B parameters
The larger the number of parameters, the more powerful the model, but it also takes up more resources to run.
Accessibility
Unlike GPT, LLaMA is an open-source model. You can download, study and run them locally. Officially, you will need to use a Google form to request the model weights. However, the models were leaked on Torrent in March 2023, less than a month after their release.
Objective
The objective of LLaMA is to build the best-performing model for a given inference budget, for example, running on an NVIDIA 3090 using less than 10GB VRAM.
Model architecture
LLaMA is a transformer model similar to GPT with the following modifications.
A brief history of LLaMA models - AGI Sphere
You can execute one of the following functions to get object future trajectory predictions (don't execute functions that have been used before):
- get_leading_object_future_trajectory() # Get the predicted future trajectory of the leading object; the function will return a trajectory containing a series of waypoints. If there is no leading vehicle, return None
- get_future_trajectories_for_specific_objects(object_ids) # Get the future trajectories of specific objects (specified by a List of object ids); the function will return trajectories for each object. If there is no object, return None
- get_future_trajectories_in_range(x_start, x_end, y_start, y_end) # Get the future trajectories where any waypoint in this trajectory falls into a given range (x_start, x_end)*(y_start, y_end) m^2; the function will return each trajectory that satisfies the condition. If there is no trajectory satisfied, return None
- get_future_waypoint_of_specific_objects_at_timestep(object_ids, timestep) # Get the
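A hypothetical Python stub of one of these tools, purely to illustrate the contract; the real agent binds the names to its prediction module:

```python
from typing import Dict, List, Optional, Tuple

Trajectory = List[Tuple[float, float]]  # (x, y) waypoints

def get_future_trajectories_in_range(
    x_start: float, x_end: float, y_start: float, y_end: float,
    predictions: Dict[str, Trajectory],  # object_id -> predicted waypoints
) -> Optional[List[Trajectory]]:
    """Return every predicted trajectory with at least one waypoint inside
    the (x_start, x_end) x (y_start, y_end) range, or None if there is none."""
    hits = [traj for traj in predictions.values()
            if any(x_start <= x <= x_end and y_start <= y <= y_end
                   for x, y in traj)]
    return hits or None
```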
A Language Agent for Autonomous Driving
5.2.4 Results on Supervised Finetuning Our experimental results show that UL2 achieves state-of-the-art performance on around 50+ NLP tasks and setups. For many, the margins are quite wide, and for those where UL2 doesn’t achieve SOTA, its performance is generally quite competitive. It is worth noting that the difficulty of obtaining SOTA on each benchmark varies vastly. For some, the SOTA model is a 32B dense equivalent (Zoph et al., 2022). For some others, it’s a base model. It is also worth noting that many benchmarks have a strong, relatively large model, e.g., 3B or 11B T5, UnifiedQA (Khashabi et al., 2020) or Unicorn (Lourie et al., 2021), as the existing SOTA model, so outperforming these models is also not exactly the easiest thing to do. Overall, we urge the readers to judge the value of these SOTA results for themselves. Finally, we note that UL2 20B does
UL2- Unifying Language Learning Paradigms
teract with clickable highlights that reveal evidence supporting or refuting each claim. Future work includes comprehensive evaluations of FLEEK, testing its compatibility with various LLMs, and subjecting it to a comprehensive benchmark.
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
9.5. Negative results with LLMs can be difficult to interpret but point to areas of real weakness
There are many sound scientific results showing that recent LLMs fail at language and commonsense reasoning tasks, sometimes relatively simple ones, under good-faith attempts to elicit good behavior (Pandia & Ettinger, 2021; Schuster & Linzen, 2022). Sometimes the details of these failures cast doubts on the quality of other related evaluations (Webson & Pavlick, 2022; Ullman, 2023). For reasons mentioned in Section 8, positive results on well-designed measures are much more reliable than negative results. Nonetheless, in some areas, including areas as simple as the handling of negation,6 LLMs show what appear to be systematic weaknesses in their ability to process language or reason about the world. We have few grounds to predict whether or when these limitations will be resolved.
6 See, for example, the Modus Tollens task by Huang and Wurgaft, described in McKenzie et al. (2022).
Eight Things to Know about Large Language Models
Bridging the Gap between Human and Machine Tool Use. The abilities to create and use tools are deeply rooted in our cognitive and perceptual systems and have evolved over millions of years. In contrast, foundation models rely primarily on statistical patterns of pre-training data, and significant gaps still exist between the tool-use capabilities of foundation models and their human counterparts. Humans can perceive the properties
2.2 Tool Categorization: A User-Interface Perspective
Tool Learning with Foundation Models
arc-competition-eda-pytorch-cnn, 2022. Accessed: 2023-05-30. [29] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? In Conference on Empirical Methods in Natural Language Processing, 2022. [30] J. Pan, T. Gao, H. Chen, and D. Chen. What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning. In Findings of the Association for Computational Linguistics, 2023. [31] K. Lu, A. Grover, P. Abbeel, and I. Mordatch. Pretrained transformers as universal computation engines. In AAAI Conference on Artificial Intelligence, 2022. [32] M. Reid, Y. Yamada, and S. S. Gu. Can wikipedia help offline reinforcement learning? In International Conference on Learning Representations (ICLR), 2023.
LargeLanguageModelsasGeneralPatternMachines
2 Related Work Fine-tuning for natural language generation. Current state-of-the-art systems for natural language generation (NLG) are based on fine-tuning pretrained LMs. For table-to-text generation, Kale (2020) fine-tunes a sequence-to-sequence model (T5; Raffel et al., 2020). For extractive and abstractive summarization, researchers fine-tune masked language models (e.g., BERT; Devlin et al., 2019) and encoder-decoder models (e.g., BART; Lewis et al., 2020), respectively (Zhong et al., 2020; Liu and Lapata, 2019; Raffel et al., 2020). For other conditional NLG tasks such as machine translation and dialogue generation, fine-tuning is also the prevalent paradigm (Zhang et al., 2020c; Stickland et al., 2020; Zhu et al., 2020; Liu et al., 2020). In this paper, we focus on table-to-text using GPT-2 and summarization using BART, but prefix-tuning can in principle be applied to other generation tasks and pretrained models, such as masked LMs.
Prefix-Tuning
final set of source views for the model by choosing the top N_vs frames in the candidate pool that are the closest to the target view in terms of camera baseline. We set N_vs = 16.
Global spatial coordinate embedding. With local image feature aggregation alone, it is hard to determine density accurately on non-surface or occluded surface points due to inconsistent features from different source views, as described in NeuRay [39]. Therefore, to improve global reasoning for density prediction, we append a global spatial coordinate embedding as an input to the ray transformer, in addition to the time embedding, similar to the ideas from [64]. Please see the supplement for more details.
Handling degeneracy through virtual views. Prior work [35] observed that optimization can converge to bad local minima if camera and object motions are mostly colinear, or scene motions are too fast to track. Inspired by [36], we synthesize images at eight randomly sampled nearby view-
DynIBaR-NeuralDynamicImage-BasedRendering
[Zhang et al., 2023a] Peitian Zhang, Shitao Xiao, Zheng Liu, Zhicheng Dou, and Jian-Yun Nie. Retrieve anything to augment large language models. arXiv preprint arXiv:2310.07554, 2023.
[Zhang et al., 2023b] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren’s song in the AI ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023.
[Zhang, 2023] Jiawei Zhang. Graph-ToolFormer: To empower LLMs with graph reasoning ability via prompt augmented by ChatGPT. arXiv preprint arXiv:2304.11116, 2023.
[Zhao et al., 2022] Jinming Zhao, Gholamreza Haffari, and Ehsan Shareghi. Generating synthetic speech from SpokenVocab for speech translation. arXiv preprint arXiv:2210.08174, 2022.
[Zheng et al., 2023] Huaixiu
RAG for Large Language Models - A Survey
There are a number of potentially applicable causes of action. Online political disinformation is often false information about an individual, and to that end might give rise to the tort of defamation or libel. Cases might include activities to spread conspiracy theories such as the sex trafficking “Pizzagate” rumor discussed in the section “Disinformation from State Actors” (Robb 2017). Consistent with the cases discussed in the section “A Brief History of CDA 230,” CDA 230 would prevent an online platform that hosted such defamatory content posted by a user from itself being held liable for defamation.16
Social Media and Democracy
Recognition of Prior Learning (RPL) for Entry to UCL
UCL Academic Manual
sets of contiguous documents as we tested with our dataloaders. The Pile dataset has been thoroughly analyzed from various ethical standpoints, and the dataset is known to contain content considered toxic, gender biased, pejorative, racially sensitive, etc. Please refer to Pile dataset references for further information.
Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster
Example
The Flodden Window (a war memorial dedicated to The Middleton Archers), in the Grade I-listed Church of St Leonard in Middleton, is said to be the oldest war memorial in the United Kingdom. <API> WikiSearch(War memorial Flodden) → Battle of Flodden > Commemoration > The stained-glass Flodden Window in Middleton Parish Church [...] was constructed by Sir Richard Assheton in memory of the Battle of Flodden and the archers from Middleton who fought in it. </API> Sir Richard Assheton of Middleton (who built St Leonard) was granted knighthood [...]
Note: The WL will be open on Friday, <API> Calendar() → Today is Thursday, March 9, 2017. </API> March 10, and Sunday, March 19 for regular hours.
The Nile has an approximate length of <API> QA(What is the approximate length of the Nile?) → 6,853 km </API> 6,853 kilometers, the White Nile being its main source.
If Venus had an atmosphere similar to Earth’s then you would expect Venus’ mean temperature to be
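A minimal sketch of extracting such annotated calls from text; the regex assumes the "<API> Tool(args) → result </API>" surface form shown in the examples above:

```python
import re

API_RE = re.compile(r"<API>\s*(\w+)\((.*?)\)\s*(?:→\s*(.*?))?\s*</API>", re.DOTALL)

def parse_api_calls(text: str):
    """Return (tool, argument string, recorded result) triples."""
    return [m.groups() for m in API_RE.finditer(text)]

calls = parse_api_calls(
    "The Nile has an approximate length of "
    "<API> QA(What is the approximate length of the Nile?) → 6,853 km </API> "
    "6,853 kilometers.")
# [('QA', 'What is the approximate length of the Nile?', '6,853 km')]
```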
Toolformer
2.5 Audio VAE and Vocoder The audio variational auto-encoder (VAE) [12] compresses the mel-spectrogram of an audio sample, m ∈ R^{T×F}, into an audio prior z₀ ∈ R^{C×T/r×F/r}, where C, T, F, r are the number of channels, number of time-slots, number of frequency-slots, and compression level, respectively. The LDM (see Section 2.2) reconstructs the audio prior ẑ₀ using input-text guidance τ. The encoder and decoder are composed of ResUNet blocks [14] and are trained by maximizing the evidence lower-bound (ELBO) [12] and minimizing adversarial loss [8]. We adopt the checkpoint of the audio VAE provided by Liu et al. [17]. Thus, we use their best reported setting, where C and r are set to 8 and 4, respectively. As a vocoder to turn the audio-VAE decoder-generated mel-spectrogram into an audio, we also use HiFi-GAN [13], as Liu et al. [17] do. 3 Experiments 3.1 Datasets and Training
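A quick arithmetic check of the compression with C = 8 and r = 4; the spectrogram size below is an illustrative assumption:

```python
T, F_bins, C, r = 1024, 64, 8, 4           # time slots, mel bins, channels, compression level
latent_shape = (C, T // r, F_bins // r)    # (8, 256, 16)
ratio = (T * F_bins) / (C * (T // r) * (F_bins // r))
print(latent_shape, ratio)                 # (8, 256, 16) 2.0
```

Note that although each spatial axis shrinks by r = 4, the C = 8 channels mean the latent holds only 2x fewer values than the spectrogram.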
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
political transparency emerged in the United States centuries after it did in Scandinavia (Hood and Heald 2006). In the early and mid-twentieth century, a number of major American political figures, ranging from Woodrow Wilson and Louis Brandeis to Harry Truman and Lyndon Johnson, began publicly arguing that transparency was a moral good and an essential requirement for a healthy, democratic society (Hood and Heald 2006). Wilson, channeling ideas expressed by Kant more than a century earlier, blamed secret treaties for contributing to the outbreak of World War I and made diplomatic transparency a significant feature of his famous “14 points” (Hood 2006). Brandeis, a Supreme Court justice and influential political commentator, advocated for even broader forms of transparency in public affairs, famously claiming that “[s]unlight is said to be the best of disinfectants” (Brandeis 1913, p. 10). Brandeis’s ideas would culminate
Social Media and Democracy
• The second test consisted of 42 questions split into sensitive topics alignment, answer ranking, and two examples of answer writing, which were manually reviewed by us. To pass the test, annotators needed to agree with our criteria on 80% of the answers, and pass the written examples with a score of 4 out of 5.
• The third test consisted of measuring the alignment with our quality assessment criteria. The test consisted of 31 different questions asking the annotators to grade different prompt-answer pairs, as well as ranking different answers to the same prompt. To measure alignment, we first collected responses from different team members, and the annotators who agreed with our preferences on more than 26 of the questions passed the test.
Llama 2
give up on sharing your perspective) • Hard to Say • Not Toxic
Does this comment contain obscene or profane language? (i.e. contains swear words, curse words, or other obscene or profane language.) • Yes • Hard to say • No
Does this comment contain sexually explicit language? (i.e. contains references to sexual acts, body parts, other lewd content.) • Yes • Hard to say • No
Does this comment contain an identity based attack? (i.e. a negative, discriminatory or hateful comment against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age or sexual orientation.) • Yes • Hard to say • No
Does this comment contain insulting language? (i.e. insulting, inflammatory, or negative towards a person or a group of people.) • Yes • Hard to say
Does this comment contain threatening language? (i.e. contains threatening language or encourages violence or harm, including self-harm.)
PaLM 2 Technical Report
4.4. Results on Synthetic Dataset We train IMavatar and baseline methods for the 10 synthetic identities and measure geometry, expression, and image reconstruction errors on 12 sequences with renderings from the COMA dataset. We outperform all baselines by a large margin on all metrics (Tab. 1). Extrapolation. While other methods are limited to interpolation, our method is capable of extrapolating beyond seen expressions and poses. In Fig. 4, we plot the geometric error for different strengths of expressions. Most methods perform well for mild expressions (small expression norm). For stronger expressions, however, their errors increase significantly. In contrast, our method only incurs a slight increase even for strong expressions (large norm). See Sup. Mat. for an analogous plot for the jaw pose. Figure 3 shows visual examples for neutral, medium, and strong expressions.
I M Avatar- Implicit Morphable Head Avatars from Videos
Benefits of Tools. Tools that are designed to streamline concrete and specific objectives bring several benefits for tool learning: (1) Mitigation for Memorization. Although foundation models have demonstrated an exceptional ability to memorize (Carlini et al., 2021, 2022, 2023), they are not capable of memorizing every piece of training data. Furthermore, foundation models are often prompted with a relatively short context during model generation, thus not all the memorized knowledge can be properly steered (Mialon et al., 2023). Additionally, memorization alone does not support the real-time coverage of up-to-date knowledge, especially in light of the potentially infinite possibilities of novel requests from users. Besides, foundation models are also criticized for hallucinating knowledge (Roller et al., 2021; Shuster et al., 2021) by generating seemingly plausible but non-factual content. Given the above factors, it is necessary to augment foundation models with
Tool Learning with Foundation Models
The core of this approach is the weakly-associated conditions {C_i}_{i=1}^n, which can encourage the LLM to engage in remote associations. This is because the empty conditions allow the LLM to operate freely, while the object noun conditions compel the LLM to draw connections between seemingly unrelated concepts. This mechanism facilitates the establishment of links between seemingly-unrelated and weakly-related concepts, encouraging the LLM to explore
Let’s Think Outside the Box
[Figure: 10@k and pass@k solve rates versus sample budget, comparing sampling temperatures (T = 0.18–0.30), top-k truncation (k = 5–1000) against plain sampling, and nucleus sampling (p = 0.5–0.99) against plain sampling.]
(a) Varying numbers of problems. (b) Varying numbers of solutions.
AlphaCode
Apologies for the confusion. Let me provide an improved answer. We are given that Charleston has 4 times as many sheep as Seattle, and Seattle has 20 sheep. So, Charleston would have 4 * 20 = 80 sheep. According to the information, Toulouse has twice as many sheep as Charleston. Therefore, Toulouse would have 2 * 80 = 160 sheep. To determine the total number of sheep, we add up the number of sheep for each city: Total = Seattle + Charleston + Toulouse = 20 + 80 + 160 = 260 sheep. After reviewing the previous answer, the correct final answer is \boxed{260}. (Correct)
Figure 5: Example on GSM8K where self-correction does not change the answer.
LARGE LANGUAGE MODELS CANNOT SELF-CORRECT REASONING YET
Introduction
PMC-LLaMA- Further Finetuning LLaMA on Medical Papers
Large-scale training of machine learning models with transformer architectures has led to groundbreaking improvements in many sub-fields of natural language processing, including language understanding and natural language generation (Vaswani et al., 2017; Dosovitskiy et al., 2021; Radford et al., 2019). The nowadays accepted (but historically surprising) key behavior of these systems is that they reliably scale: they continuously improve in performance when the number of model parameters and amount of data grow. These increases in performance are well-described by various power laws, as studied by Kaplan et al. (2020). This sets up a dominant paradigm in which scaling is the key to performance improvement (Sutton, 2019). The power of scale has set off a race to produce extremely large models, which in turn has created an environment where few researchers or practitioners feel that they are capable of training a language
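A minimal illustration of fitting one such power law, L(N) = (N_c / N)^α, by linear regression in log-log space; the data points are synthetic, not from any paper:

```python
import numpy as np

N = np.array([1e7, 1e8, 1e9, 1e10])   # parameter counts (synthetic)
L = np.array([4.5, 3.8, 3.2, 2.7])    # validation losses (synthetic)

# log L = alpha * log(Nc) - alpha * log(N): a line in log-log coordinates.
slope, intercept = np.polyfit(np.log(N), np.log(L), deg=1)
alpha = -slope
Nc = np.exp(intercept / alpha)
print(f"alpha = {alpha:.3f}, Nc = {Nc:.3e}")
```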
CRAMMING-TRAININGALANGUAGEMODELONA SINGLEGPUINONEDAY
[Figure 3 panel labels: math problems; 10 languages] Figure 3: Combinations of finetuning data formats in this work. We finetune with and without exemplars,
Scaling Instruction-Finetuned Language Models
$f_3(s) = \{\, s \cup m_1 \cup m_2 \mid m_1 \in T(V^{M_1} \cdot D^M) \text{ and } m_2 \in T(V^{M_2} \cdot D^M) \,\} = \{\, s \cup m \mid m \in T((V^{M_1} \cup V^{M_2}) \cdot D^M) \,\} = \{\, s \cup m \mid m \in T(V^{M_3} \cdot D^M) \,\}. \qquad \square$

9.4. An example: merge and shrink abstraction
A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen
prompting techniques offer novel avenues for leveraging models, but cannot lead to latent perilous abilities. Hence, this manner of risk is exclusively a consequence of emergent abilities. As previously mentioned, this threat is not universal among all emergent abilities, but pertains exclusively to those involving reasoning or planning. To be explicit, formal linguistic abilities do not pose any threat, nor does the ability of models to perform above the random baseline on tasks that can be solved through memory ('memorisable tasks'), since such proficiency only indicates models' ability to memorise information. Given that prompting techniques manifest themselves at the same scale as the emergence of a significant number of abilities, coupled with the safety implications associated with emergent abilities but not with the prompting techniques, it becomes imperative to ascertain the extent of these emergent abilities in the absence of prompting techniques.
AreEmergentAbilitiesinLarge Language Models just In-Context
significantly amplifies the performance of MoE models on both held-out benchmarks (MMLU, BBH) and held-in QA and reasoning benchmarks, in comparison to dense models of equivalent capacity. The advantages are amplified even further for larger MoE models. For instance, instruction tuning enhances the performance of ST32B by a substantial 45.2%, while the improvement observed for FLAN-PALM62B is comparatively modest at around 6.6%. Furthermore, the FLAN-EC strategy consistently outshines the FLAN-GS approach for a given model across various scales and tasks. It is noteworthy that the performance gap between the token-choice and expert-choice models can be bridged when we incorporate the advanced auxiliary loss and pre-training strategy exhibited in ST-MoE [56]. This integration led to the development of our FLAN-ST models. Considering that the largest ST-MoE set the benchmark in a variety of NLP tasks when appropriately fine-tuned, we have also decided to scale up FLAN-ST, employing instruction
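To make the token-choice versus expert-choice distinction concrete, the sketch below implements plain token-choice top-1 routing with a Switch-style load-balancing auxiliary loss. It is a generic illustration of the baseline routing scheme, not the FLAN-EC/ST-MoE implementation:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of token-choice top-1 routing: each token picks one expert.
# (In expert-choice routing, each expert instead picks its top tokens.)
# The auxiliary loss follows the common Switch-style recipe.

def token_choice_route(x: torch.Tensor, router: torch.nn.Linear, n_experts: int):
    logits = router(x)                      # [tokens, n_experts]
    probs = F.softmax(logits, dim=-1)
    expert_idx = probs.argmax(dim=-1)       # each token selects one expert
    # Load-balancing loss: fraction of tokens routed to each expert times
    # the mean router probability assigned to that expert.
    frac_tokens = F.one_hot(expert_idx, n_experts).float().mean(dim=0)
    mean_probs = probs.mean(dim=0)
    aux_loss = n_experts * (frac_tokens * mean_probs).sum()
    return expert_idx, probs, aux_loss

router = torch.nn.Linear(512, 8)            # d_model=512, 8 experts (illustrative)
idx, probs, aux = token_choice_route(torch.randn(64, 512), router, 8)
```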
Mixture-of-Experts
Simple Batch Size + Learning Rate Scaling with µP. More precisely, we find that the µP learning rate transfers as long as each model size is trained with a batch size roughly consistent with or larger than the critical batch size. The closer the batch size is to the critical batch size for a given model, the better the loss will be when using the µTransferred learning rate. Further, when training models with a batch size smaller than the critical batch size, the learning rate should be reduced linearly in proportion to the reduction in batch size, consistent with the findings of Shallue et al. (2018) and Yang et al. (2021).
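A minimal sketch of this adjustment rule, assuming the critical batch size is supplied by the user rather than estimated by the snippet:

```python
# Keep the muTransferred learning rate at or above the critical batch size;
# scale it down linearly when training below it. The critical batch size is
# a user-supplied estimate here, not something this function computes.

def adjust_lr(mu_transferred_lr: float, batch_size: int, critical_batch_size: int) -> float:
    if batch_size >= critical_batch_size:
        return mu_transferred_lr
    return mu_transferred_lr * batch_size / critical_batch_size

print(adjust_lr(6e-4, batch_size=256, critical_batch_size=1024))  # 1.5e-4
```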
Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster
4 Demonstrations: Our MiniGPT-4 exhibits a multitude of capabilities similar to those demonstrated by GPT-4. These include generating detailed image descriptions (Fig. 2), identifying amusing aspects within images (Fig. 3), and uncovering unusual content (Fig. 4). Additionally, the model can generate websites from handwritten text (Fig. 5). We have also discovered that our MiniGPT-4 possesses other abilities such as identifying problems in images and providing solutions (Fig. 6), creating poems or rap songs inspired by images (Fig. 7), writing stories for images (Fig. 8), making advertisements for products in images (Fig. 9), identifying individuals (Fig. 10), providing insightful image comments (Fig. 11), retrieving facts related to images (Fig. 12), and teaching users to cook foods with given photos (Fig. 13). These diverse examples showcase the strong capabilities of our MiniGPT-4. 5 Limitations
MiniGPT-4- Enhancing Vision-Language Understanding with Advanced Large Language Models
also observed that many of the largest outliers in terms of worse-than-expected performance according to this trend are languages that have unique scripts and are more distantly related to the Indo-European languages making up the majority of the training dataset, such as Hebrew (HE), Telugu (TE), Chinese (ZH), and Korean (KO). These differences could be due to a lack of transfer caused by linguistic distance, our byte-level BPE tokenizer being a poor match for these languages, or variations in data quality.
RobustSpeechRecognitionviaLarge-ScaleWeakSupervision
including simple user interfaces (UIs) in popular smartphone apps. However, challenges arise when the apps are new and their UIs are less typical, which highlights a major problem that our work aims to address. Among open-source efforts from the industry and research community, the LLaMA series (Touvron et al., 2023a,b) are the most popular equivalents and have been fine-tuned to acquire conversational abilities, employing a decoder-only architecture similar to ChatGPT (Taori et al., 2023; Zheng et al., 2023). Building upon LLaMA, many multimodal LLMs, such as LLaVA (Liu et al.,
AppAgents
sion of the input image to initialize sampling from an intermediate timestep. RePaint [47] achieves state-of-the-art results on image inpainting by repeating multiple forward and backward diffusion steps to enforce harmonization. Despite its improved performance, this resampling strategy significantly increases the computational time. In contrast, CCDF [10] and DDRM [37] propose efficient techniques for reducing the length of the reverse process while retaining image quality at a high level. More recently, MCG [11] introduced a novel manifold constraint step, which, combined with the standard reverse diffusion, outperforms the aforementioned methods on a number of inverse tasks, including inpainting. We adopt this approach in our work to accurately fill in the missing pixels of both texture and reflectance maps of a face from a given image via diffusion-based inpainting, while fully preserving the observed ones. Note also that this approach does not assume any specific
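For reference, the sketch below shows the generic mask-guided inpainting step that this family of methods builds on: at each reverse step, observed pixels are re-imposed by noising the observation to the current timestep. The `scheduler` and `denoise` callables are assumed interfaces, and the manifold constraint step of MCG is deliberately omitted:

```python
import torch

# Generic mask-guided diffusion inpainting step (illustrative, not the exact
# MCG/Relightify procedure): after one unconditional reverse step, observed
# pixels are re-imposed by noising the observation and blending with the mask.

def inpaint_step(x_t, x_obs, mask, t, scheduler, denoise):
    """mask == 1 marks observed pixels; denoise performs one reverse step."""
    x_prev = denoise(x_t, t)                        # unconditional reverse step
    noise = torch.randn_like(x_obs)
    x_obs_t = scheduler.add_noise(x_obs, noise, t)  # observation noised to step t
    return mask * x_obs_t + (1 - mask) * x_prev     # keep observed pixels
```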
Relightify-Relightable3DFacesfromaSingleImageviaDiffusionModels
Table 2: Ablation analysis. The scores in this table are BLEU.

| Model      | En → Es | Es → En |
|------------|---------|---------|
| Proposed   | 24.27   | 18.85   |
| −Recon     | 2.99    | 0.41    |
| −BackTrans | 3.62    | 1.91    |
| −MuseLoss  | 6.22    | 5.44    |
| −SpecAug   | 12.88   | 9.23    |

Table 1: Performance of S2ST trained with unsupervised MUSE embedding. "SMOS" denotes the 5-scale MOS in naturalness predicted by the SQuId model.

| Dataset | Model    | En → Es SMOS | En → Es BLEU | Es → En SMOS | Es → En BLEU |
|---------|----------|--------------|--------------|--------------|--------------|
| U-Conv  | Baseline | 4.03         | 6.13         | 4.05         | 5.58         |
| U-Conv  | Proposed | 4.10         | 24.27        | 3.89         | 18.85        |
| S-CV11  | Baseline | 4.05         | 10.48        | 4.08         | 9.46         |
| S-CV11  | Proposed | 4.17         | 14.25        | 4.03         | 13.45        |

5.2 Synthesized Speech Data
Translatotron3
6.5. Performance on General Visual-Language Tasks

Although it is not the focus of our work, we report in Tab. 5 results on general vision-language tasks, including OK-VQA (Marino et al., 2019), VQA v2 (Goyal et al., 2017) and COCO captioning (Chen et al., 2015). A single, generalist

[Table 2: Results on planning tasks in the simulated environment from Lynch et al. (2022). Zero-shot baselines SayCan (oracle affordances) (Ahn et al., 2022) and PaLI (Chen et al., 2022) score 0.0 on Task 1, with Tasks 2 and 3 not reported. PaLM-E variants (12B and 84B) are compared on Tasks 1–3 with 10–80 demonstrations per task, varying whether training uses a single robot or the full mixture, whether the LLM is frozen or finetuned, and whether the model starts from LLM+ViT pretraining or is trained from scratch.]
PaLM-E- An Embodied Multimodal Language Model
In particular, before training our main two-component scene representation, we jointly train two lightweight models to obtain a motion segmentation mask $M_i$ for each input frame $I_i$. We model static scene content with an IBRNet [70] that renders a pixel color $\hat{B}^{st}$ using volume rendering along each ray, via feature aggregation along epipolar lines from nearby source views, without considering scene motion; we model dynamic scene content with a 2D convolutional encoder-decoder network $\mathcal{D}$, which predicts a 2D opacity map $\alpha^{dy}_i$, a confidence map $\beta^{dy}_i$, and an RGB image $\hat{B}^{dy}_i$ from an input frame:

$(\alpha^{dy}_i, \beta^{dy}_i, \hat{B}^{dy}_i) = \mathcal{D}(I_i). \quad (5)$

The full reconstructed image is then composited pixelwise from the outputs of the two models:

[Figure 5. Motion segmentation. We show the full rendering $\hat{B}^{full}_i$ (top) and the motion segmentation overlaid with rendered dynamic content $\alpha^{dy}_i \odot \hat{B}^{dy}_i$ (bottom). Our approach segments challenging dynamic elements such as the moving shadow, swing, and swaying bushes.]
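A sketch of the pixelwise composite described above, using standard alpha blending of the dynamic and static renderings (the paper's exact formulation may differ in details):

```python
import numpy as np

# Pixelwise composite of static and dynamic renderings with the predicted
# 2D opacity map. Standard alpha-blending form, used here as an illustration.

def composite(b_static: np.ndarray, b_dynamic: np.ndarray, alpha_dy: np.ndarray) -> np.ndarray:
    """b_* are HxWx3 images; alpha_dy is HxWx1 in [0, 1] and broadcasts."""
    return alpha_dy * b_dynamic + (1.0 - alpha_dy) * b_static
```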
DynIBaR-NeuralDynamicImage-BasedRendering
rewards ranging from 6-78), whereas d3 can consistently find a solution to Grid within 50 episodes.
LargeLanguageModelsasGeneralPatternMachines
TriviaQA Evaluation setups. The open-domain QA community customarily uses public development datasets as test datasets, as test data for QA datasets is often restricted and dedicated to reading comprehension purposes. We report our results using the dataset splits used in DPR [26], which are consistent with common practice in open-domain QA. For TriviaQA, this test dataset is the public TriviaQA Web Development split. Roberts et al. [52] used the TriviaQA official Wikipedia test set instead. Févry et al. [14] follow this convention in order to compare with Roberts et al. [52] (see the appendix of [14]). We report results on both test sets to enable fair comparison to both approaches. We find that our performance is much higher using the official Wiki test set rather than the more conventional open-domain test set, which we attribute to the official Wiki test set questions being simpler to answer from Wikipedia. E Further Details on FEVER
Retrieval-AugmentedGenerationfor Knowledge-IntensiveNLPTasks
network indirectly by optimizing rank decomposition matrices of the dense layers’ change during adaptation instead, while keeping the pre-trained weights frozen, as shown in Figure 1. Using GPT-3 175B as an example, we show that a very low rank (i.e., r in Figure 1 can be one or two) suffices even when the full rank (i.e., d) is as high as 12,288, making LoRA both storage- and compute-efficient. LoRA possesses several key advantages.
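A minimal sketch of such a low-rank adapter around a frozen linear layer; dimensions and hyperparameters below are illustrative:

```python
import torch
import torch.nn as nn

# Minimal LoRA-style adapter: the pretrained weight W stays frozen and the
# update is parameterized as a rank-r product B @ A, scaled by alpha / r.
# Init follows the paper's recipe (A small Gaussian, B zero, so the adapter
# starts as a no-op).

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 2, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                   # keep pretrained weights frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Rank 2 adapter on a d=768 layer (the GPT-3 example in the text has d=12,288).
layer = LoRALinear(nn.Linear(768, 768), r=2)
```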
LORA
Second, beyond greater transparency about the data platforms have "on hand," CDA 230 does not preclude measures mandating that platforms require greater disclosure from their users as well. Proposals on this front have focused on the processes around online advertising, seen to be one channel for political disinformation in the 2016 US presidential election. Researchers have proposed "know your customer" requirements for online advertisers paralleling similar rules imposed in the financial sector, as well as more stringent rules around the labeling of anonymous and automated accounts (Diresta and Harris 2017, p. 82). The Honest Ads Act, bipartisan legislation originally proposed in October 2017 but seeing little subsequent action, would require that large online platforms maintain a public file of all electioneering communications beyond a certain monetary threshold. This file would include a copy of the advertisement, targeting data, as well as information
Social_Media_and_Democracy
Table 1: Evaluating Design Choices in AppAgent Performance. This table contrasts different design elements within AppAgent. Key findings include: our custom-developed action space surpasses the raw action space in efficiency; the exploration phase, incorporating both autonomous interaction and observation of human demonstrations, significantly enhances agent performance; and the auto-generated documentation yields outcomes on par with those derived from manually crafted documents.

| Method          | Action Space | Document          | SR ↑   | Reward ↑ | Avg. Steps |
|-----------------|--------------|-------------------|--------|----------|------------|
| GPT4 (Baseline) | Raw          | None              | 2.2%   | 0.6      | 4.0        |
| AppAgent        | Ours         | None              | 48.9%  | 3.5      | 6.9        |
| AppAgent        | Ours         | Auto. Exploration | 73.3%  | 5.1      | 4.4        |
| AppAgent        | Ours         | Watching Demos    | 84.4%  | 4.7      | 5.1        |
| AppAgent        | Ours         | Manually Crafted  | 95.6%  | 5.5      | 5.5        |

| Method          | Action Space | Document         | Avg. Rank ↓ | Num. Tools |
|-----------------|--------------|------------------|-------------|------------|
| GPT4 (Baseline) | Ours         | None             | 2.30        | 2.4        |
| AppAgent        | Ours         | Watching Demos   | 1.95        | 5.8        |
| AppAgent        | Ours         | Manually Crafted | 1.75        | 4.0        |
AppAgents
LM with n = 16 until it reached 100K steps. Table 4 compares its test set score with that of the saturated prompt tuning method and of the neural recursive LM method (to be presented next), all trained for the same number of steps. Textual LM recursion improved on the saturated prompt tuning model that provided its input candidates by 1.8 points, despite both the candidate proposal and candidate choosing processes being based on the same frozen LM. We conjecture that this is because while the first LM pass provides the candidates at its final representation, in the second LM pass all candidates are considered and processed jointly with the question, already from the input stage, which increases the expressivity of the question answering process (Levine et al., 2022).
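A hedged sketch of this two-pass recursion, where `generate` and `score` are assumed wrappers around the same frozen LM rather than an actual API:

```python
# Textual LM recursion with a single frozen LM: pass 1 samples answer
# candidates; pass 2 re-reads the question together with all candidates and
# picks one. The prompt wording below is illustrative, not the paper's.

def lm_recursion(question: str, generate, score, n: int = 16) -> str:
    candidates = [generate(question) for _ in range(n)]       # first LM pass
    chooser_prompt = (
        f"Question: {question}\n"
        + "\n".join(f"Candidate {i}: {c}" for i, c in enumerate(candidates))
        + "\nThe best answer is:"
    )
    # Second pass: the frozen LM processes question and all candidates jointly
    # from the input stage, then the highest-scoring candidate is returned.
    return max(candidates, key=lambda c: score(chooser_prompt, c))
```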
STANDING ON THE SHOULDERS OF GIANT FROZEN LANGUAGE MODELS
$\Big( \|\epsilon^w - \epsilon_\theta(x^w_t, t)\|_2^2 - \|\epsilon^l - \epsilon_\theta(x^l_t, t)\|_2^2 \Big) \qquad (14)$

where $x^*_t = \alpha_t x^*_0 + \sigma_t \epsilon^*$, with $\epsilon^* \sim \mathcal{N}(0, I)$, is a draw from $q(x^*_t \mid x^*_0)$ (Eq. (2)), and $\lambda_t = \alpha_t^2 / \sigma_t^2$ is the signal-to-noise ratio.
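The preference objective this term feeds into can be sketched as follows; the weighting constant and mean reduction are illustrative rather than the paper's exact Eq. (14):

```python
import torch
import torch.nn.functional as F

# Diffusion-DPO-style objective (sketch): compare denoising errors of the
# trainable and reference networks on preferred (w) and dispreferred (l)
# images noised to the same timestep. Signs follow the DPO recipe; the
# per-timestep weighting omega(lambda_t) is folded into beta for brevity.

def diffusion_dpo_loss(eps_theta_w, eps_ref_w, eps_theta_l, eps_ref_l,
                       eps_w, eps_l, beta: float = 5000.0) -> torch.Tensor:
    err = lambda pred, target: (pred - target).pow(2).mean(dim=(1, 2, 3))
    diff_w = err(eps_theta_w, eps_w) - err(eps_ref_w, eps_w)  # change on winner
    diff_l = err(eps_theta_l, eps_l) - err(eps_ref_l, eps_l)  # change on loser
    return -F.logsigmoid(-beta * (diff_w - diff_l)).mean()
```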
DiffusionModelAlignmentUsing Direct Preference Optimization
These keywords are combined in the manner of "figure wearing a hat is expressing". The Portraits set consists of 400 prompts in total, with 60% for training and 40% for testing. Daily Life. The above two prompt sets are constructed by combining phrases from a limited range of candidates in a structured manner. To demonstrate that our approach works well in a more general setting that includes more complex and varied sentences, we contribute another dataset named Daily Life, created in collaboration with ChatGPT [2]. ChatGPT is a large language model designed to converse with humans and can produce high-quality responses when given appropriate context. We craft our question as "generate a text describing a man or woman's appearance and daily activities. The examples are as follows: a tall man wearing sunglasses is eating a hamburger, a slim woman wearing a baseball cap is reading a book, a beautiful woman wearing a tie is watching a TV". By repeatedly
Instant3D
6.4 Limitations and Future Work This section outlines the key limitations and possible extensions of the current work. Personality traits in other LLMs: One of the core contributions of this work is to understand how personality traits in generated language are affected by model size and training procedure. We focus on the PaLM family of language models and personality traits in their simulated survey responses. However, the described methodology for administering psychometric surveys does not constrain the use of a specific model family, and is applicable to any other decoder-only architecture model, such as GPT.
PersonalityTraitsinLargeLanguageModels