Muennighoff committed
Commit 1541a0a (verified) · 1 Parent(s): 35c8dca

Scheduled Commit

data/retrieval_individual-2361f09d-0756-4a57-afa8-f8b610b3df1f.jsonl CHANGED
@@ -2,3 +2,5 @@
  {"tstamp": 1752257697.8829, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1752257588.4076, "finish": 1752257697.8829, "ip": "", "conv_id": "5a9f531008644dda997bebc9149194b3", "model_name": "voyage-multilingual-2", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
  {"tstamp": 1752264584.0926, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1752264580.0344, "finish": 1752264584.0926, "ip": "", "conv_id": "1ab4685cbf2b4783a7dde9f7569cf768", "model_name": "text-embedding-004", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
  {"tstamp": 1752264584.0926, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1752264580.0344, "finish": 1752264584.0926, "ip": "", "conv_id": "3915cdd9fdaa40cea7fb430f5383b4fe", "model_name": "BM25", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Artificial intelligence\n\nIn the late teens and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program was taught only the rules of the game and developed strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions in AI research. According to AI Impacts, about $50 billion annually was invested in \"AI\" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in \"AI\".\nAbout 800,000 \"AI\"-related U.S. job openings existed in 2022.\nPhilosophy\nDefining artificial intelligence\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\" He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\""]], "corpus": "wikipedia"}
+ {"tstamp": 1752315261.6635, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1752315260.256, "finish": 1752315261.6635, "ip": "", "conv_id": "cbdde94bf4ef46faa6c216790087fe06", "model_name": "GritLM/GritLM-7B", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
+ {"tstamp": 1752315261.6635, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1752315260.256, "finish": 1752315261.6635, "ip": "", "conv_id": "5ca78e51af1b424db18372f07aede50c", "model_name": "text-embedding-3-large", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}