SCHRÖDINGER'S MEMORY: LARGE LANGUAGE MODELS

Wei Wang
Department of Computing
The Hong Kong Polytechnic University
weiuat.wang@connect.polyu.hk

Qing Li
Department of Computing
The Hong Kong Polytechnic University
qing-prof.li@polyu.edu.hk

ABSTRACT

Memory is the foundation of LLMs' functionality, yet past research has lacked an in-depth exploration of their memory capabilities and underlying theory. In this paper, we apply UAT theory to explain the memory mechanism of LLMs and propose a new approach for evaluating LLM performance by comparing the memory capacities of different models. Through extensive experiments, we validate our theory and the memory abilities of LLMs. Finally, we compare the capabilities of the human brain and LLMs, highlighting both their similarities and differences in terms of working mechanisms.

1 INTRODUCTION

Language is not only one of humanity's most essential capabilities but also the foundation of communication, knowledge transfer, and the development of civilization. Language models can be seen as a simulation of human intelligence, enabling them to perform tasks that traditionally required human input. Large Language Models (LLMs), particularly those based on the Transformer architecture, are among the hottest research topics in artificial intelligence today. They have begun to mimic human language abilities and are already impacting various aspects of everyday life, such as machine translation, text summarization, sentiment analysis, question-answering systems, and text generation.

Despite the impressive capabilities of LLMs, research on their memory mechanisms remains underexplored. Memory is a crucial ability for humans; without it, we would struggle to complete even the simplest tasks. For instance, something as routine as eating requires remembering where to eat, how to get there, what to eat, and which utensils to use. These everyday memories have become second nature to us, so much so that we rarely think about them consciously. Clearly, memory plays a vital role in guiding nearly every aspect of our daily lives. As LLMs become more powerful, this raises the question: do these models possess memory? If so, what form does it take? How does it differ from human memory?

Current research into LLM memory mechanisms primarily follows two directions:

Expanding Context Length: This approach aims to equip LLMs with long-term memory by extending the context window (Chen et al., 2023; Zhu et al., 2023; Yang, 2023; Fei et al., 2023). Since short contexts fail to provide enough information, increasing the context length allows the model to maintain more comprehensive information across long sequences.

External Memory Integration: This method involves building memory storage systems (Graves et al., 2014; Xiao et al., 2024; Wu et al., 2022; Yang et al., 2024b) that encode and store past events (Zhang et al., 2023), allowing the model to retrieve and update memories as needed. Such mechanisms enable models to forget or reinforce certain memories over time.

While these studies have made strides in addressing the memory limitations of LLMs, they do not fully explain how memory works within these models. For example, when asked "Who is the President of the United States?", LLMs like GPT-4 or Llama-3 might answer with outdated information, indicating that some form of memory is present. However, this memory does not come from an external storage unit but is inferred from the model's training data.
This raises fundamental questions: Why do LLMs exhibit this inferential memory? How does it compare to human memory?
This paper will leverage the Universal Approximation Theorem (UAT) to explain the reasoning and memory abilities of LLMs. The contributions of this work are as follows:

- An explanation of LLMs' memory and reasoning abilities through the lens of UAT.
- A new, more objective method for evaluating LLMs' memory and reasoning strengths.
- A comparison between the memory and reasoning capabilities of LLMs and human cognition.

The structure of this paper is as follows: In Section 2, we briefly explain UAT theory and present the mathematical form of UAT as it applies to multi-layer Transformers. In Section 3, we theoretically and experimentally demonstrate that LLMs possess memory capabilities. Finally, in Section 4, we conduct a comprehensive analysis of human and LLM abilities, focusing on memory capacity.

2 UAT AND LLMS

The UAT (Cybenko, 2007; Popescu et al., 2009) serves as the foundational theory of deep learning. Our goal is to theoretically explain Transformer-based LLMs using the UAT framework. To do this, we first present the mathematical form of UAT in Section 2.1, followed by the corresponding UAT form for LLMs in Section 2.2. We then use this UAT form to explain the memory characteristics of LLMs.

2.1 UAT

In this section, we briefly introduce the UAT, which was originally introduced by Cybenko (2007). According to Theorem 2 from Cybenko (2007), if $\sigma$ is any continuous sigmoidal function, then a finite sum of the form

$$G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(W_j^{T} x + \theta_j\right) \quad (1)$$

is dense in $C(I_n)$. Here, $W_j \in \mathbb{R}^n$ and $\alpha_j, \theta_j \in \mathbb{R}$ are fixed. For any $f \in C(I_n)$ and $\varepsilon > 0$, there exists a function $G(x)$ such that

$$|G(x) - f(x)| < \varepsilon \quad \text{for all } x \in I_n. \quad (2)$$

This indicates that, with a sufficiently large $N$, a neural network can approximate any continuous function over a closed interval. Hornik et al. (1989) further establish that multilayer feedforward networks adhere to the UAT, being capable of approximating arbitrary Borel measurable functions. Considering Equation (1), where the function $G(x)$ outputs a scalar in $\mathbb{R}$, the framework naturally extends when $G(x)$ maps to $\mathbb{R}^m$, requiring approximation across each dimension. To handle this multidimensional output, a straightforward modification to Equation (1) is sufficient: the transformation matrix $W_j$ is adjusted to lie in the space $\mathbb{R}^{n \times m}$, the bias term $\theta_j$ is redefined as a vector in $\mathbb{R}^m$, and $\alpha_j$ is reshaped into a matrix.
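To make Equation (1) concrete, consider the following minimal Python sketch: the hidden parameters $W_j$ and $\theta_j$ are sampled at random, and only the coefficients $\alpha_j$ are fitted by least squares to a one-dimensional target function. This is purely an illustration of the theorem (the random-parameter scales and the use of least squares are arbitrary choices, not part of our method); the approximation error shrinks as $N$ grows, as the theorem predicts.

```python
# Minimal numerical illustration of Equation (1): G(x) = sum_j alpha_j * sigma(W_j^T x + theta_j).
# Hidden parameters (W_j, theta_j) are sampled at random; the coefficients alpha_j are obtained by
# least squares. This is an illustrative sketch of the UAT only, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def sigma(z):                                   # a continuous sigmoidal function
    return 1.0 / (1.0 + np.exp(-z))

def fit_uat(x, f_x, N=200):
    """Approximate f on the sampled points x with an N-term sum of sigmoid units."""
    W = rng.normal(scale=10.0, size=N)          # W_j (scalar input, so W_j is a scalar here)
    theta = rng.uniform(-10.0, 10.0, size=N)    # theta_j
    Phi = sigma(np.outer(x, W) + theta)         # Phi[i, j] = sigma(W_j * x_i + theta_j)
    alpha, *_ = np.linalg.lstsq(Phi, f_x, rcond=None)   # alpha_j
    return lambda x_new: sigma(np.outer(x_new, W) + theta) @ alpha

x = np.linspace(-1.0, 1.0, 400)
f = np.sin(4 * np.pi * x)                       # an arbitrary continuous target on [-1, 1]
G = fit_uat(x, f, N=200)
print("max |G(x) - f(x)| =", np.max(np.abs(G(x) - f)))  # shrinks as N increases
```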
2.2 THE UAT FORMAT OF TRANSFORMER-BASED LLMS

Figure 1: The process of the Transformer (TF): a multi-head attention sublayer and an MLP (FFN) sublayer, each combined with its input through a residual addition.

Current LLMs are primarily based on the memory-enhanced Transformer architecture. In UAT2LLMs (Wang & Li, 2024b), it has already been demonstrated that the mathematical structure of multi-layer Transformers aligns with the UAT in a general sense. However, unlike the original UAT, the UAT form of Transformer-based models has the ability to dynamically fit functions based on the input. Figure 1 illustrates a basic Transformer module, and according to UAT2LLMs, the corresponding UAT form for Figure 1 is:

$$x_{i+1} = \left(W'_{i+1,1}\, x_0 + b_{i+1,1}\right) + \sum_{j=1}^{i+1} W'_{j,3}\, \sigma\!\left(W'_{j,2}\, x_0 + b'_{j,2}\right) \quad (3)$$

where $x_{i+1}$ is the output of the $(i{+}1)$-th layer, $x_0$ is the input of the network, and $b'_{j,2} = \left(W'_{j,2}\, b'_{j-1,3} + b'_{j,2}\right) + W'_{j,2}\, \mathrm{UAT}^{R}_{j-1}$, with $\mathrm{UAT}^{R}_{j-1} = \sum_{k=1}^{j-1} W'_{k,3}\, \sigma\!\left(W'_{k,2}\, x_0 + b'_{k,2}\right)$. The term $b'_{j,2}$ is approximated by the $j$-th layer of the UAT with $x_0$ as input. This enhances the model's ability to dynamically adjust functions based on the input. In the multi-head attention mechanism, the parameters change dynamically with the input. Therefore, in the formula above, all $W'_{j,1}$, $W'_{j,2}$, and $W'_{j,3}$ parameters in layer $i$, where $j = 1, \ldots, i$, are dynamically adjusted based on the input.
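The following toy sketch illustrates this last point for a single attention head (the weight names and sizes are hypothetical, and masking, layer normalization, and multiple heads are omitted): the mixing matrix that attention applies to its input is itself a function of the input, which is the sense in which the parameters in Equation (3) are dynamic rather than fixed like the $W_j$ of the original UAT.

```python
# Toy illustration of input-dependent parameters in self-attention (single head, no mask).
# The attention matrix A depends on the input X, so the effective linear map A @ X @ W_v @ W_o
# changes with X. Illustrative sketch only; weight names and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
d = 8                                                   # model width
W_q, W_k, W_v, W_o = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(4))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(X):
    """Return the input-dependent attention matrix A(X) for a sequence X of shape (n, d)."""
    scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(d)
    return softmax(scores, axis=-1)

def attention_out(X):
    return attention_map(X) @ (X @ W_v) @ W_o           # head output for input X

X1 = rng.normal(size=(5, d))
X2 = rng.normal(size=(5, d))
# The stored weights (W_q, W_k, W_v, W_o) are fixed, but the realized mixing matrix is not:
print(np.allclose(attention_map(X1), attention_map(X2)))   # False: A depends on the input
```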
3 MEMORY

In this section, we demonstrate the memory capabilities of LLMs. First, in Section 3.1, we provide a clear definition of memory. Then, in Section 3.2, we explain the memory mechanism of LLMs using UAT theory and validate their memory characteristics through experiments. In Section 3.3, we explore the impact of input length on the accuracy of LLM memory.

3.1 THE DEFINITION OF MEMORY

Before formally studying memory in LLMs, it is important to first define, or at least provide a relatively accurate description of, memory. According to Wikipedia: "Memory is the faculty of the mind by which data or information is encoded, stored, and retrieved when needed."

However, this definition has some fundamental issues. Encoding data or information is not problematic, as information in the brain is transmitted via electrical signals, and we need to encode that information in a way the brain can process. The problem arises with the concepts of "storage" and "retrieval." The brain does not have a structure analogous to a database for storing information. So, where is this information actually stored? Is it in the neurons of the brain? If so, does a single neuron store a word, or does it store an entire sentence?

Question 1: What is Newton's first law?
Answer 1: Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon.

So, is this sentence stored within a single neuron? Or does each neuron store just a word, with a specific region of the brain dedicated to this particular memory? Given the vast amount of information humans receive daily, can neurons truly store such an immense volume of data without hindering normal cognitive processes? After all, almost every routine activity requires memory. Take, for example, the simple task of going to the cafeteria: we need to remember when to go, the cafeteria's location, the route to get there, which foods are available, what utensils there are, where to find them, and how to use them. Moreover, if this memory is stored in a fixed set of neurons, then every time the question is raised, the response should be identical, since the retrieval would be from the same static content. Every word in the response should be exact, with no omissions or additions (even if the information has been abstractly encoded, as long as the encoding and decoding processes are consistent, the content should remain unchanged). This, however, is clearly unreasonable.

Therefore, we need a more precise definition of the concept of "memory":
Memory is defined by two key components: input and output.

Input: The input is information that is the same as or similar to what the brain (or an LLM) has previously encountered (this is a necessary condition for memory; without input, there is no memory).

Output: The result based on the input, which could be correct, incorrect, or forgotten. If the result is correct, it aligns with information previously acquired. (Note: for there to be an output, there must be an input; memory does not emerge spontaneously. Without specific input conditions, a person would not recall a particular event.)

Using Question 1 as an example, the input is: "What is Newton's first law?" Without this input, no one would suddenly recall Newton's first law. The recollection of Newton's first law is triggered by input related to the theoretical context. This is why input is a necessary condition for memory, as it is the input that stimulates recall. The memory might be accurate, or it might be incorrect, indicating a deviation from previously acquired information; this deviation could be minor, significant, or even total forgetting. For example:

Question 1: What is Newton's first law?
Answer 2 (minor distortions): Every object perseveres in its state of rest, except insofar as it is compelled to change that state by forces impressed thereon.
Answer 3 (severe distortions): Every object always perseveres in its state of rest.
Answer 4 (memory lapse): I do not know.

In summary, the term "memory" was traditionally used to refer specifically to human memory before the emergence of LLMs. Now, we believe that LLMs also exhibit memory. Therefore, we verify the memory characteristics of LLMs based on the definition of memory outlined above.

3.2 THE MEMORY MECHANISM AND ABILITY OF LLMS

In Section 2.2, we introduced the UAT format corresponding to Transformer-based LLMs. This UAT format can dynamically adjust to fit the corresponding output based on the input. Following this idea, could we also consider that the memory feature of LLMs adjusts to specific outputs based on the input? To explore this hypothesis, we designed a simple experiment.

In the process of learning, we often need to memorize certain content, such as poems. For this experiment, we utilized publicly available datasets from Hugging Face: CN Poems (larryvrh/Chinese-Poems) for Chinese poetry and ENG Poems (jnb666/poems) for English poetry. We preprocessed the data in line with typical human memorization habits, allowing the LLMs to output the content of poems based on basic input information. For CN Poems, the input consisted of the dynasty, author, and title, while for ENG Poems, the input was the author and title.

We restricted the length of the input data to a maximum of 256 characters. Due to differences in character encoding between Chinese and English, a single Chinese character usually corresponds to one token, while an English word may map to multiple tokens. As a result, after tokenization, the length of the Chinese input remains almost unchanged, with a maximum of 256 tokens. In contrast, the English input expands to a maximum of 730 tokens after tokenization. For the experiment, we selected 2,000 poems from each dataset.

Table 1: The memory ability of Qwen1.5-0.5B-Chat, Qwen2-0.5B-Instruct, Qwen2-1.5B-Instruct, bloom-389m-zh, bloom-1b4-zh, bloom-560m, and bloom-1b7 on CN Poems and ENG Poems.

Models            | Qwen1.5-0.5B-Chat | Qwen2-0.5B-Instruct | Qwen2-1.5B-Instruct | bloom-389m-zh | bloom-1b4-zh | bloom-560m | bloom-1b7
CN Poems Acc (%)  | 68.85             | 77.5                | 96.9                | 75.55         | 96.6         | --         | --
ENG Poems Acc (%) | 99.85             | 99.85               | 99.9                | --            | --           | 99.2       | 99.15

To test the memory ability of LLMs, we fine-tuned the Qwen series models (Bai et al., 2023; Yang et al., 2024a) and the BLOOM models (Workshop et al., 2023) on CN Poems and ENG Poems. The results in Table 1 indicate that LLMs possess memory capabilities and align perfectly with the definition
of memory we provided. The training process is akin to giving a person 2,000 poems and asking them to memorize as many as possible, with a limit of reading each poem only 100 times. On the CN Poems dataset, the best-performing models were Qwen2-1.5B-Instruct and bloom-1b4-zh, which remembered 1,938 and 1,932 poems respectively. In contrast, on the ENG Poems dataset, nearly all models memorized all the poems. These results are remarkable. A human, without specialized memory training, would struggle to memorize even 100 poems under such conditions, whereas the LLMs were able to memorize almost 100% of the 2,000 poems.

However, overall performance on the CN Poems dataset was weaker for models like Qwen1.5-0.5B-Chat, Qwen2-0.5B-Instruct, and bloom-389m-zh. We believe this is due to insufficient pre-training, resulting in relatively poor language comprehension. For instance, Qwen2-0.5B-Instruct outperformed Qwen1.5-0.5B-Chat, even though both models have similar data and model sizes. Stronger language understanding suggests a better fit for natural language, which can be interpreted as a model having higher language comprehension ability; this, in turn, aids in model expansion. The original Qwen documentation supports this, as Qwen2-0.5B-Instruct outperforms Qwen1.5-0.5B-Chat across a variety of tasks. Similarly, a person with better language comprehension would learn other related information more efficiently. The availability and comprehensiveness of English-language data compared to Chinese likely contributed to the models' superior performance in fitting English texts. Another factor may be that Chinese is a more complex language, and smaller models have limited learning capacity. In larger models such as Qwen2-1.5B-Instruct and bloom-1b4-zh, performance improved significantly over Qwen2-0.5B-Instruct and bloom-389m-zh, approaching the results seen on the ENG Poems dataset.

Figure 2 shows the results of generating a full poem based on partial input information. As seen in the figure, after training, the models can successfully recite the entire poem using only the title and author. Figure 3 displays some examples of prediction errors.

Figure 2: Examples of poems recited in full by Qwen1.5-0.5B-Chat from a dictation prompt containing only the dynasty, author, and title (CN Poems) or the author and title (ENG Poems).
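A test of this kind can be sketched as follows, assuming a Hugging Face causal language model that has already been fine-tuned on the poems; the prompt template, record fields, and generation settings below are illustrative assumptions rather than the exact configuration behind Table 1.

```python
# Illustrative sketch of the "poem dictation" memory test: prompt a fine-tuned causal LM with the
# poem's metadata and count exact reproductions of the reference text. The field names, prompt
# template, and generation settings are assumptions, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

def make_prompt(record):
    # CN Poems: dynasty + author + title; ENG Poems: author + title (per the dataset description).
    parts = ["Dictation:"]
    if record.get("dynasty"):
        parts.append(f"Dynasty: {record['dynasty']}")
    parts += [f"Author: {record['author']}", f"Title: {record['title']}"]
    return " ".join(parts)

def memory_accuracy(model_name, records, max_new_tokens=512):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)   # assumed already fine-tuned on the poems
    hits = 0
    for rec in records:
        inputs = tokenizer(make_prompt(rec), return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        completion = tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:],
                                      skip_special_tokens=True)
        hits += int(completion.strip() == rec["poem"].strip())  # exact-match "memory" criterion
    return hits / len(records)

# Example usage with a hypothetical fine-tuned checkpoint and list of poem records:
# acc = memory_accuracy("path/to/finetuned-checkpoint", poem_records)
```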
Figure 3: Examples of prediction errors by Qwen1.5-0.5B-Chat: generated poems shown alongside the ground-truth labels for an ENG Poems prompt and a CN Poems prompt.

3.3 THE TOKEN LENGTH EFFECT

Additionally, we believe that the length of the input text has a significant impact on the memory capabilities of LLMs: the longer the text, the harder it is to remember. To verify this, we set the input text length in the CN Poems dataset to be between 256 and 512 characters (we used Chinese text because the relationship between token length and original text length is not fixed in English). After fine-tuning the models for 100 epochs on CN Poems, the results are shown in Table 2. It is evident that as the text length increases, the difficulty for the model to remember the content also increases.

Table 2: The memory ability of Qwen1.5-0.5B-Chat, Qwen2-0.5B-Instruct, Qwen2-1.5B-Instruct, bloom-389m-zh, and bloom-1b4-zh on CN Poems in the condition of longer prediction.

Models           | Qwen1.5-0.5B-Chat | Qwen2-0.5B-Instruct | Qwen2-1.5B-Instruct | bloom-389m-zh | bloom-1b4-zh
CN Poems Acc (%) | 44.9              | 56.85               | 86.95               | 68.6          | 93.65

4 A COMPARISON BETWEEN HUMANS AND LLMS

Based on the definition of memory provided in Section 3 and the experimental results, we believe that LLMs indeed possess memory capabilities, and this ability shows no fundamental difference from human memory. Building on this, we extend the concept of memory in the brain to other cognitive abilities, such as social skills, imagination, and creativity. All of these can be attributed to reasoning ability based on existing knowledge.

Currently, mainstream academia holds that LLMs either lack human-like reasoning abilities or perform poorly in this regard. So, do LLMs actually possess reasoning capabilities similar to humans? Why do LLMs seem to exhibit weak reasoning skills? Before formally addressing this question, let us define reasoning ability in simple terms:

Reasoning Ability: The ability to generate results consistent with previously learned knowledge based on specific inputs.

From this definition, we can identify several key factors in reasoning ability: the knowledge learned, the specific input, and the ability to produce results aligned with the learned knowledge. According to this definition, the memory capabilities of LLMs can also be considered a form of reasoning. As shown in Figure 2, LLMs can iteratively learn to reconstruct a complete poem based solely on its title and author. These poems are not stored in a particular weight within the model but are dynamically generated based on the input. This is why we describe LLMs' memory as "Schrödinger's
memory": we can only determine whether an LLM remembers something after posing a question and examining the output; otherwise, it is impossible to know. Humans operate similarly: we can only verify our memory by answering specific questions; otherwise, we cannot assess it. For example, if you ask someone how many poems they remember, they might not be able to give a precise answer, but if you ask them whether they remember a specific poem, they can usually respond. As shown in Figure 3, although some predictions are incorrect, the results still align with linguistic conventions and match the poem's title to some degree. This could be seen as a form of creativity in the absence of exact memory, which is also a form of reasoning.

Thus, we propose that the brain operates like a model that dynamically fits inputs. In some sense, the mathematical model of the human brain may be similar to, or even a more advanced version of, the dynamic UAT model based on Transformers. However, we believe their underlying mechanisms are the same: they both dynamically fit corresponding outputs based on inputs.

So why do LLMs seem to perform poorly in reasoning tasks? We believe there are three main factors: model size, data quality and quantity, and model architecture.

Model Size: In practice, larger LLMs tend to be more powerful. Theoretically, as shown in the UAT2LLMs (Wang & Li, 2024b) and UAT2Parallel (Wang & Li, 2024a) models, the more layers a model has, the stronger its dynamic fitting ability becomes, resulting in better performance.

Data Quality and Quantity: In practice, current LLMs have improved greatly due to training on vast amounts of data. The larger and higher quality the dataset, the stronger the model's performance. From the perspective of human learning, humans spend decades in education, from primary school through university. In terms of language exposure, humans are immersed in language from birth, and without this exposure, we would not be able to develop strong language skills.

Model Architecture: Current models often consolidate all tasks into a single, sequential network, which can limit the functionality or complexity of the tasks they can handle. Research suggests that the human brain is modular, with different regions responsible for different tasks. Similarly, we could design multiple large models specialized for different tasks, and then use the UAT2Parallel framework to enable parallel processing among these models, managed by a central control unit.

Since we propose that both LLMs and the brain function as dynamic models that fit inputs, why build such dynamic models? What are the advantages of this approach? We believe that this dynamic fitting capability gives the brain infinite possibilities. The brain does not need to remember everything; it only needs to focus on what is important. Imagine if a newborn's brain were pre-loaded with the weights of its parents: there would be no need to fit the world, because most of the external environment remains constant. In such a scenario, the brain's weights would hardly ever be updated, and the person would lose creativity. Dynamic fitting, however, bestows the brain with creativity. Since not everything in the brain is correct, continuous interaction with the outside world updates these weights. One of these updates might bring us closer to the truth, allowing us to explore new ideas and eventually achieve innovation.

A great example of dynamic fitting in the brain is Henry Molaison (Scoville & Milner, 1957; Victor et al., 1961; Milner & Klein, 2015).
After his hippocampus (Bliss & Collingridge, 1993; Squire, 1992; Erickson et al., 2011; Eckardt, 1980) was damaged, he could no longer form new long-term memories, though his existing memories remained intact. We believe that the hippocampus acts as a switch controlling whether the weights responsible for long-term memory in the brain can be updated. Once the hippocampus is damaged, the brain's weight parameters can no longer change, meaning that while past inputs (those acquired before the hippocampal damage) can still produce corresponding outputs (i.e., recalling past events), the inability to update weights prevents the formation of new memories.
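Under this analogy, hippocampal damage corresponds to freezing a model's weights: input-output mappings fitted before the freeze are preserved, while new ones can no longer be formed. The toy sketch below illustrates only this analogy, with an arbitrary model and data; it makes no claim about biological mechanisms.

```python
# Toy illustration of the "hippocampus as a switch on weight updates" analogy:
# freezing parameters preserves already-fitted responses but blocks the fitting of new ones.
# Purely illustrative; not a model of the brain.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def fit(model, x, y, steps=500):
    params = [p for p in model.parameters() if p.requires_grad]
    if not params:                                  # "hippocampus damaged": nothing can be updated
        return
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

old_x, old_y = torch.randn(64, 4), torch.randn(64, 1)
fit(model, old_x, old_y)                            # "old memories" are fitted normally

for p in model.parameters():                        # the "switch" is turned off
    p.requires_grad_(False)

new_x, new_y = torch.randn(64, 4), torch.randn(64, 1)
fit(model, new_x, new_y)                            # no effect: new memories cannot form

print("old recall error:", torch.nn.functional.mse_loss(model(old_x), old_y).item())  # reduced by the earlier fitting
print("new recall error:", torch.nn.functional.mse_loss(model(new_x), new_y).item())  # not reduced: weights were frozen
```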
5 CONCLUSION

In this paper, we demonstrate that LLMs possess memory capabilities, which are enabled by their Transformer-based architecture. This architecture functions as a dynamic fitting UAT model with a strong ability to adaptively fit inputs. As a result, LLMs can recall entire content based on minimal input information. Since this memory can only be confirmed when triggered by input, we refer to it as "Schrödinger's memory." Through extensive experiments, we validated that the memory mechanism of LLMs aligns with this theory. Additionally, we compared LLMs with the human brain and found that their working mechanisms are similar, as both dynamically fit outputs based on inputs.

REFERENCES

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.

Tim V. P. Bliss and Graham L. Collingridge. A synaptic model of memory: long-term potentiation in the hippocampus. Nature, 361:31-39, 1993. URL https://api.semanticscholar.org/CorpusID:4326182.

Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. LongLoRA: Efficient fine-tuning of long-context large language models. ArXiv, abs/2309.12307, 2023. URL https://api.semanticscholar.org/CorpusID:262084134.

G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, pp. 303-314, Jan 2007. doi: 10.1007/bf02551274. URL http://dx.doi.org/10.1007/bf02551274.

Michael J. Eckardt. The hippocampus as a cognitive map. Journal of Nervous and Mental Disease, 168:191-192, 1980. URL https://api.semanticscholar.org/CorpusID:144502478.

Kirk I. Erickson, Michelle W. Voss, Ruchika Shaurya Prakash, Chandramallika Basak, Amanda N. Szabo, Laura Chaddock, Jennifer S. Kim, Susie Heo, Heloisa Alves, Siobhan M. White, Thomas R. Wójcicki, Emily L. Mailey, Victoria Jeanne Vieira, Stephen A. Martin, Brandt D. Pence, Jeffrey A. Woods, Edward McAuley, and Arthur F. Kramer. Exercise training increases size of hippocampus and improves memory. Proceedings of the National Academy of Sciences, 108:3017-3022, 2011. URL https://api.semanticscholar.org/CorpusID:13980824.

Wei Zhi Fei, Xueyan Niu, Pingyi Zhou, Lu Hou, Bo Bai, Lei Deng, and Wei Han. Extending context window of large language models via semantic compression. In Annual Meeting of the Association for Computational Linguistics, 2023. URL https://api.semanticscholar.org/CorpusID:266335580.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. ArXiv, abs/1410.5401, 2014. URL https://api.semanticscholar.org/CorpusID:15299054.

Kurt Hornik, Maxwell B. Stinchcombe, and Halbert L. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366, 1989. URL https://api.semanticscholar.org/CorpusID:2757547.

B. Milner and Denise Klein. Loss of recent memory after bilateral hippocampal lesions: memory and memories—looking back and looking forward. Journal of Neurology, Neurosurgery & Psychiatry, 87:230-230, 2015. URL https://api.semanticscholar.org/CorpusID:13234383.

Marius-Constantin Popescu, Valentina E. Balas, Liliana Perescu-Popescu, and Nikos Mastorakis.
Multilayer perceptron and neural networks. WSEAS Transactions on Circuits and Systems, 8(7):579-588, 2009.
William Beecher Scoville and B. Milner. Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20:11-21, 1957. URL https://api.semanticscholar.org/CorpusID:20365179.

Larry R. Squire. Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans. Psychological Review, 99(2):195-231, 1992. URL https://api.semanticscholar.org/CorpusID:14104324.

Maurice Victor, Jay B. Angevine, Elliott L. Mancall, and C. M. Fisher. [Memory loss with lesions of hippocampal formation. Report of a case with some remarks on the anatomical basis of memory]. Archives of Neurology, 5:244-263, 1961. URL https://api.semanticscholar.org/CorpusID:12965148.

Wei Wang and Qing Li. Universal approximation theory: Foundations for parallelism in neural networks. ArXiv, abs/2407.21670, 2024a. URL https://api.semanticscholar.org/CorpusID:271571549.

Wei Wang and Qing Li. Universal approximation theory: The basic theory for large language models, 2024b. URL https://arxiv.org/abs/2407.00958.

BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, et al.
Bloom: A 176b-parameter open-access multilingual language model, 2023. URL https://arxiv.org/abs/2211.05100.

Yuhuai Wu, Markus Norman Rabe, DeLesley S. Hutchins, and Christian Szegedy. Memorizing transformers. ArXiv, abs/2203.08913, 2022. URL https://api.semanticscholar.org/CorpusID:247519194.

Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, and Maosong Sun. InfLLM: Training-free long-context extrapolation for LLMs with an efficient context memory. 2024. URL https://api.semanticscholar.org/CorpusID:267523068.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a.

Hongkang Yang, Zehao Lin, Wenjin Wang, Hao Wu, Zhiyu Li, Bo Tang, Wenqiang Wei, Jinbo Wang, Zeyun Tang, Shichao Song, Chenyang Xi, Yu Yu, Kai Chen, Feiyu Xiong, Linpeng Tang, and Weinan E. Memory3: Language modeling with explicit memory. ArXiv, abs/2407.01178, 2024b. URL https://api.semanticscholar.org/CorpusID:270870116.

Jianxin Yang. LongQLoRA: Efficient and effective method to extend context length of large language models. ArXiv, abs/2311.04879, 2023. URL https://api.semanticscholar.org/CorpusID:265050454.
Zhenyu (Allen) Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, and Beidi Chen. H2O: Heavy-hitter oracle for efficient generative inference of large language models. ArXiv, abs/2306.14048, 2023. URL https://api.semanticscholar.org/CorpusID:259263947.

Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. PoSE: Efficient context window extension of LLMs via positional skip-wise training. ArXiv, abs/2309.10400, 2023. URL https://api.semanticscholar.org/CorpusID:262053659.