Dataset Viewer
Auto-converted to Parquet

| Column | Type | Range / classes |
| --- | --- | --- |
| id | string | lengths 6–113 |
| author | string | lengths 2–36 |
| task_category | string | 39 classes |
| tags | sequence | lengths 1–4.05k |
| created_time | int64 | 1,646B–1,742B |
| last_modified | timestamp[s] | 2020-05-14 13:13:12 – 2025-03-18 10:01:09 |
| downloads | int64 | 0–118M |
| likes | int64 | 0–4.86k |
| README | string | lengths 30–1.01M |
| matched_task | sequence | lengths 1–10 |
| is_bionlp | string | 3 values |
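Since the dataset is auto-converted to Parquet, it can be loaded programmatically. The snippet below is a minimal sketch using the 🤗 `datasets` library; the repository id `your-org/model-cards` is a placeholder assumption, as the actual dataset id is not shown in this view.

```python
from datasets import load_dataset

# Placeholder repository id: substitute the real dataset repo.
ds = load_dataset("your-org/model-cards", split="train")

print(ds.features)  # mirrors the column schema in the table above
row = ds[0]
print(row["id"], row["task_category"], row["downloads"], row["is_bionlp"])
```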
fathyshalab/massive_play-roberta-large-v1-2-0.64
fathyshalab
text-classification
[ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
1,675,873,072,000
2023-02-08T16:18:14
8
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # fathyshalab/massive_play-roberta-large-v1-2-0.64 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-2-0.64") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
LoneStriker/gemma-7b-4.0bpw-h6-exl2
LoneStriker
text-generation
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:2305.14314", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,708,617,308,000
2024-02-22T15:57:48
6
0
--- library_name: transformers license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms tags: [] extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning examples You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. 
* Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 | | [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **54.0** | **56.4** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. 
| Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. 
### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
Non_BioNLP
ravimehta/Test
ravimehta
summarization
[ "asteroid", "summarization", "en", "dataset:togethercomputer/RedPajama-Data-1T", "region:us" ]
1,687,455,278,000
2023-06-22T17:35:55
0
0
--- datasets: - togethercomputer/RedPajama-Data-1T language: - en library_name: asteroid metrics: - bleurt pipeline_tag: summarization ---
[ "SUMMARIZATION" ]
Non_BioNLP
Ahmed107/nllb200-ar-en_v11.1
Ahmed107
translation
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "translation", "generated_from_trainer", "base_model:Ahmed107/nllb200-ar-en_v8", "base_model:finetune:Ahmed107/nllb200-ar-en_v8", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,701,932,253,000
2023-12-07T08:02:05
7
1
--- base_model: Ahmed107/nllb200-ar-en_v8 license: cc-by-nc-4.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: nllb200-ar-en_v11.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb200-ar-en_v11.1 This model is a fine-tuned version of [Ahmed107/nllb200-ar-en_v8](https://huggingface.co/Ahmed107/nllb200-ar-en_v8) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5309 - Bleu: 65.0906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
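The card above leaves intended usage unspecified ("More information needed"). As a hedged sketch, the snippet below runs Arabic-to-English inference through the standard `transformers` translation pipeline, assuming the fine-tuned checkpoint keeps the usual NLLB-200 language codes (`arb_Arab`, `eng_Latn`); those codes are an assumption, not something stated in the card.

```python
from transformers import pipeline

# Assumption: the checkpoint uses the standard NLLB-200 language codes,
# with Arabic -> English as the model name implies.
translator = pipeline(
    "translation",
    model="Ahmed107/nllb200-ar-en_v11.1",
    src_lang="arb_Arab",
    tgt_lang="eng_Latn",
)

print(translator("السلام عليكم", max_length=64)[0]["translation_text"])
```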
[ "TRANSLATION" ]
Non_BioNLP
satish860/distilbert-base-uncased-finetuned-emotion
satish860
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,649,756,134,000
2022-08-11T12:44:06
47
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.923 name: Accuracy - type: f1 value: 0.9232534263543563 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2174 - Accuracy: 0.923 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.839 | 1.0 | 250 | 0.3212 | 0.907 | 0.9049 | | 0.2516 | 2.0 | 500 | 0.2174 | 0.923 | 0.9233 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.11.0a0+17540c5 - Datasets 1.16.1 - Tokenizers 0.10.3
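The card reports accuracy and F1 on the emotion dataset but includes no inference snippet. A minimal sketch with the `text-classification` pipeline is shown below; note that the label names returned depend on the checkpoint's `id2label` mapping, which the card does not document.

```python
from transformers import pipeline

# If id2label was not customised during fine-tuning, predictions come back
# as generic labels (LABEL_0 ... LABEL_5) for the six emotion classes.
classifier = pipeline(
    "text-classification",
    model="satish860/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy the package finally arrived!"))
```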
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
muhtasham/medium-mlm-imdb-target-tweet
muhtasham
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,670,742,460,000
2022-12-11T07:10:48
114
0
--- datasets: - tweet_eval license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: medium-mlm-imdb-target-tweet results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - type: accuracy value: 0.7620320855614974 name: Accuracy - type: f1 value: 0.7599032399785389 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-mlm-imdb-target-tweet This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/medium-mlm-imdb) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.6869 - Accuracy: 0.7620 - F1: 0.7599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.456 | 4.9 | 500 | 0.8890 | 0.7754 | 0.7720 | | 0.0578 | 9.8 | 1000 | 1.3492 | 0.7540 | 0.7509 | | 0.0173 | 14.71 | 1500 | 1.6143 | 0.7594 | 0.7584 | | 0.0124 | 19.61 | 2000 | 1.6869 | 0.7620 | 0.7599 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
ericzzz/falcon-rw-1b-instruct-openorca
ericzzz
text-generation
[ "transformers", "safetensors", "falcon", "text-generation", "text-generation-inference", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us" ]
1,700,859,032,000
2024-03-05T00:49:13
2,405
11
--- datasets: - Open-Orca/SlimOrca language: - en license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference inference: false model-index: - name: falcon-rw-1b-instruct-openorca results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 34.56 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 60.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 28.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.42 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 60.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 3.41 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-instruct-openorca name: Open LLM Leaderboard --- # 🌟 Falcon-RW-1B-Instruct-OpenOrca Falcon-RW-1B-Instruct-OpenOrca is a 1B parameter, causal decoder-only model based on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and finetuned on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset. **✨Check out our new conversational model [Falcon-RW-1B-Chat](https://huggingface.co/ericzzz/falcon-rw-1b-chat)!✨** **📊 Evaluation Results** Falcon-RW-1B-Instruct-OpenOrca was the #1 ranking model (unfortunately not anymore) on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) in ~1.5B parameters category! A detailed result can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca). | Metric | falcon-rw-1b-instruct-openorca | falcon-rw-1b | |------------|-------------------------------:|-------------:| | ARC | 34.56 | 35.07 | | HellaSwag | 60.93 | 63.56 | | MMLU | 28.77 | 25.28 | | TruthfulQA | 37.42 | 35.96 | | Winogrande | 60.69 | 62.04 | | GSM8K | 3.41 | 0.53 | | **Average**| **37.63** | **37.07** | **🚀 Motivations** 1. 
To create a smaller, open-source, instruction-finetuned, ready-to-use model accessible for users with limited computational resources (lower-end consumer GPUs). 2. To harness the strength of Falcon-RW-1B, a competitive model in its own right, and enhance its capabilities with instruction finetuning. ## 📖 How to Use The model operates with a structured prompt format, incorporating `<SYS>`, `<INST>`, and `<RESP>` tags to demarcate different parts of the input. The system message and instruction are placed within these tags, with the `<RESP>` tag triggering the model's response. **📝 Example Code** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = 'ericzzz/falcon-rw-1b-instruct-openorca' tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( 'text-generation', model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map='auto', ) system_message = 'You are a helpful assistant. Give short answers.' instruction = 'What is AI? Give some examples.' prompt = f'<SYS> {system_message} <INST> {instruction} <RESP> ' response = pipeline( prompt, max_length=200, repetition_penalty=1.05 ) print(response[0]['generated_text']) # AI, or Artificial Intelligence, refers to the ability of machines and software to perform tasks that require human intelligence, such as learning, reasoning, and problem-solving. It can be used in various fields like computer science, engineering, medicine, and more. Some common applications include image recognition, speech translation, and natural language processing. ``` ## ⚠️ Limitations This model may generate inaccurate or misleading information and is prone to hallucination, creating plausible but false narratives. It lacks the ability to discern factual content from fiction and may inadvertently produce biased, harmful or offensive content. Its understanding of complex, nuanced queries is limited. Users should be aware of this and verify any information obtained from the model. The model is provided 'as is' without any warranties, and the creators are not liable for any damages arising from its use. Users are responsible for their interactions with the model. ## 📬 Contact For further inquiries or feedback, please contact at [email protected]. ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-instruct-openorca) | Metric |Value| |---------------------------------|----:| |Avg. |37.63| |AI2 Reasoning Challenge (25-Shot)|34.56| |HellaSwag (10-Shot) |60.93| |MMLU (5-Shot) |28.77| |TruthfulQA (0-shot) |37.42| |Winogrande (5-shot) |60.69| |GSM8k (5-shot) | 3.41|
[ "TRANSLATION" ]
Non_BioNLP
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,716,459,970,000
2024-05-23T10:26:22
9
0
--- datasets: - fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.co/BAAI/bge-base-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
PragmaticPete/tinyqwen
PragmaticPete
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,718,651,742,000
2024-06-17T19:19:41
14
0
--- language: - en license: apache-2.0 pipeline_tag: text-generation tags: - pretrained --- # Qwen2-0.5B ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model. Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. ## Requirements The code for Qwen2 is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ## Performance The evaluation of base models mainly focuses on model performance in natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include: **English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot) **Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript) **Math Tasks**: GSM8K (4-shot), MATH (4-shot) **Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot) **Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot) #### Qwen2-0.5B & Qwen2-1.5B performances | Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B | | :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: | |#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B | |MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** | |MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 | |Theorem QA | - | - | - |- | 8.9 | **15.0** | |HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 | |MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 | |GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** | |MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** | |BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 | |HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 | |Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 | |ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 | |TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** | |C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** | |CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** | ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
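The Requirements section above warns that older Transformers releases raise `KeyError: 'qwen2'`. The sketch below is a minimal, assumed version check followed by loading the checkpoint this card accompanies; the card itself recommends post-training (SFT, RLHF, continued pretraining) rather than direct text generation with the base model.

```python
from packaging import version
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

# transformers >= 4.37.0 is required for the `qwen2` architecture;
# older versions fail with KeyError: 'qwen2'.
assert version.parse(transformers.__version__) >= version.parse("4.37.0")

model_id = "PragmaticPete/tinyqwen"  # the repository this card accompanies
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
print(model.config.model_type)  # expected: "qwen2"
```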
[ "QUESTION_ANSWERING", "TRANSLATION" ]
Non_BioNLP
Pclanglais/Larth-Mistral
Pclanglais
text-generation
[ "transformers", "pytorch", "mistral", "text-generation", "fr", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
1,696,941,413,000
2023-10-21T21:16:07
20
5
--- language: - fr library_name: transformers license: cc-by-4.0 pipeline_tag: text-generation widget: - text: 'Answer in Etruscan: Who is the father of Lars?' example_title: Lars inference: parameters: temperature: 0.7 repetition_penalty: 1.2 --- Larth-Mistral is the first LLM based on the Etruscan language, fine-tuned on 1087 original inscriptions. Larth-Mistral supports cross-linguistic instructions (question in English, answer in Etruscan) and automated translations. The formulas to use are: * *Answer in Etruscan: [Instruction in English]* * *Translate in Etruscan: [Text in English]* * *Translate in English: [Text in Etruscan]* Larth-Mistral is not really an LLM *in* Etruscan but rather a poetic approximation of what an LLM in Etruscan could be. Currently there is not even enough material to decipher the language, let alone to generate text in it. The inscriptions used by Larth-Mistral were collected by the [Larth-Etruscan-NLP](https://github.com/GianlucaVico/Larth-Etruscan-NLP/tree/main) project. ## Examples of instructions **Who is the father of Lars?** θre larsusas clareza ler θvele lupi lsveneii lcaii vnii lcaii avl ppa clesu claσ' menu hraχ **What is the best recipe for an apple pie?** θuva {m} : apuσ kaklaχ anaχ turuχ leθthturic lam θas acraχ clesinice leslumna τiχie huθ **What is the best city of the dodecapolis?** εις σθαχθρ τημενθ σπολισ δωδεκα πολισ σθαχθρ συφθ *Unexpectedly, Larth-Mistral has switched to Ancient Greek.*
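The card lists the prompt formulas but no inference code. Below is a hedged sketch that applies the "Answer in Etruscan:" formula with a plain `transformers` generation call; the sampling values simply echo the widget parameters above (temperature 0.7, repetition_penalty 1.2) and are not a documented recommendation.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Pclanglais/Larth-Mistral"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One of the prompt formulas listed in the card.
prompt = "Answer in Etruscan: Who is the father of Lars?"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,        # mirrors the widget's inference parameters
    repetition_penalty=1.2,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```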
[ "TRANSLATION" ]
Non_BioNLP
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,716,922,458,000
2024-05-28T18:54:49
6
0
--- datasets: - fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
pEpOo/catastrophy8
pEpOo
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
1,702,908,844,000
2023-12-18T14:14:25
50
0
--- base_model: sentence-transformers/all-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: "Rly tragedy in MP: Some live to recount horror: \x89ÛÏWhen I saw coaches\ \ of my train plunging into water I called my daughters and said t..." - text: You must be annihilated! - text: 'Severe Thunderstorms and Flash Flooding Possible in the Mid-South and Midwest http://t.co/uAhIcWpIh4 #WEATHER #ENVIRONMENT #CLIMATE #NATURE' - text: 'everyone''s wonder who will win and I''m over here wondering are those grapes real ?????? #BB17' - text: i swea it feels like im about to explode ?? inference: true model-index: - name: SetFit with sentence-transformers/all-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9203152364273205 name: Accuracy --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>'To fight bioterrorism sir.'</li><li>'85V-265V 10W LED Warm White Light Motion Sensor Outdoor Flood Light PIR Lamp AUC http://t.co/NJVPXzMj5V http://t.co/Ijd7WzV5t9'</li><li>'Photo: referencereference: xekstrin: I THOUGHT THE NOSTRILS WERE EYES AND I ALMOST CRIED FROM FEAR partake... http://t.co/O7yYjLuKfJ'</li></ul> | | 1 | <ul><li>'Police officer wounded suspect dead after exchanging shots: RICHMOND Va. (AP) \x89ÛÓ A Richmond police officer wa... 
http://t.co/Y0qQS2L7bS'</li><li>"There's a weird siren going off here...I hope Hunterston isn't in the process of blowing itself to smithereens..."</li><li>'Iranian warship points weapon at American helicopter... http://t.co/cgFZk8Ha1R'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9203 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("pEpOo/catastrophy8") # Run inference preds = model("You must be annihilated!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 14.5506 | 54 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 438 | | 1 | 323 | ### Training Hyperparameters - batch_size: (20, 20) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:-----:|:-------------:|:---------------:| | 0.0001 | 1 | 0.3847 | - | | 0.0044 | 50 | 0.3738 | - | | 0.0088 | 100 | 0.2274 | - | | 0.0131 | 150 | 0.2747 | - | | 0.0175 | 200 | 0.2251 | - | | 0.0219 | 250 | 0.2562 | - | | 0.0263 | 300 | 0.2623 | - | | 0.0307 | 350 | 0.1904 | - | | 0.0350 | 400 | 0.2314 | - | | 0.0394 | 450 | 0.1669 | - | | 0.0438 | 500 | 0.1135 | - | | 0.0482 | 550 | 0.1489 | - | | 0.0525 | 600 | 0.1907 | - | | 0.0569 | 650 | 0.1728 | - | | 0.0613 | 700 | 0.125 | - | | 0.0657 | 750 | 0.109 | - | | 0.0701 | 800 | 0.0968 | - | | 0.0744 | 850 | 0.2101 | - | | 0.0788 | 900 | 0.1974 | - | | 0.0832 | 950 | 0.1986 | - | | 0.0876 | 1000 | 0.0747 | - | | 0.0920 | 1050 | 0.1117 | - | | 0.0963 | 1100 | 0.1092 | - | | 0.1007 | 1150 | 0.1582 | - | | 0.1051 | 1200 | 0.1243 | - | | 0.1095 | 1250 | 0.2873 | - | | 0.1139 | 1300 | 0.2415 | - | | 0.1182 | 1350 | 0.1264 | - | | 0.1226 | 1400 | 0.127 | - | | 0.1270 | 1450 | 0.1308 | - | | 0.1314 | 1500 | 0.0669 | - | | 0.1358 | 1550 | 0.1218 | - | | 0.1401 | 1600 | 0.114 | - | | 0.1445 | 1650 | 0.0612 | - | | 0.1489 | 1700 | 0.0527 | - | | 0.1533 | 1750 | 0.1421 | - | | 0.1576 | 1800 | 0.0048 | - | | 0.1620 | 1850 | 0.0141 | - | | 0.1664 | 1900 | 0.0557 | - | | 0.1708 | 1950 | 0.0206 | - | | 0.1752 | 2000 | 0.1171 | - | | 0.1795 | 2050 | 0.0968 | - | | 0.1839 | 2100 | 0.0243 | - | | 0.1883 | 2150 | 0.0233 | - | | 0.1927 | 2200 | 0.0738 | - | | 0.1971 | 2250 | 0.0071 | - | | 0.2014 | 2300 | 0.0353 | - | | 0.2058 | 2350 | 0.0602 | - | | 0.2102 | 2400 | 0.003 | - | | 0.2146 | 2450 | 0.0625 | - | | 0.2190 | 2500 | 
0.0173 | - | | 0.2233 | 2550 | 0.1017 | - | | 0.2277 | 2600 | 0.0582 | - | | 0.2321 | 2650 | 0.0437 | - | | 0.2365 | 2700 | 0.104 | - | | 0.2408 | 2750 | 0.0156 | - | | 0.2452 | 2800 | 0.0034 | - | | 0.2496 | 2850 | 0.0343 | - | | 0.2540 | 2900 | 0.1106 | - | | 0.2584 | 2950 | 0.001 | - | | 0.2627 | 3000 | 0.004 | - | | 0.2671 | 3050 | 0.0074 | - | | 0.2715 | 3100 | 0.0849 | - | | 0.2759 | 3150 | 0.0009 | - | | 0.2803 | 3200 | 0.0379 | - | | 0.2846 | 3250 | 0.0109 | - | | 0.2890 | 3300 | 0.0019 | - | | 0.2934 | 3350 | 0.0154 | - | | 0.2978 | 3400 | 0.0017 | - | | 0.3022 | 3450 | 0.0003 | - | | 0.3065 | 3500 | 0.0002 | - | | 0.3109 | 3550 | 0.0025 | - | | 0.3153 | 3600 | 0.0123 | - | | 0.3197 | 3650 | 0.0007 | - | | 0.3240 | 3700 | 0.0534 | - | | 0.3284 | 3750 | 0.0004 | - | | 0.3328 | 3800 | 0.0084 | - | | 0.3372 | 3850 | 0.0088 | - | | 0.3416 | 3900 | 0.0201 | - | | 0.3459 | 3950 | 0.0002 | - | | 0.3503 | 4000 | 0.0102 | - | | 0.3547 | 4050 | 0.0043 | - | | 0.3591 | 4100 | 0.0124 | - | | 0.3635 | 4150 | 0.0845 | - | | 0.3678 | 4200 | 0.0002 | - | | 0.3722 | 4250 | 0.0014 | - | | 0.3766 | 4300 | 0.1131 | - | | 0.3810 | 4350 | 0.0612 | - | | 0.3854 | 4400 | 0.0577 | - | | 0.3897 | 4450 | 0.0235 | - | | 0.3941 | 4500 | 0.0156 | - | | 0.3985 | 4550 | 0.0078 | - | | 0.4029 | 4600 | 0.0356 | - | | 0.4073 | 4650 | 0.0595 | - | | 0.4116 | 4700 | 0.0001 | - | | 0.4160 | 4750 | 0.0018 | - | | 0.4204 | 4800 | 0.0013 | - | | 0.4248 | 4850 | 0.0008 | - | | 0.4291 | 4900 | 0.0832 | - | | 0.4335 | 4950 | 0.0083 | - | | 0.4379 | 5000 | 0.0007 | - | | 0.4423 | 5050 | 0.0417 | - | | 0.4467 | 5100 | 0.0001 | - | | 0.4510 | 5150 | 0.0218 | - | | 0.4554 | 5200 | 0.0001 | - | | 0.4598 | 5250 | 0.0012 | - | | 0.4642 | 5300 | 0.0002 | - | | 0.4686 | 5350 | 0.0006 | - | | 0.4729 | 5400 | 0.0223 | - | | 0.4773 | 5450 | 0.0612 | - | | 0.4817 | 5500 | 0.0004 | - | | 0.4861 | 5550 | 0.0 | - | | 0.4905 | 5600 | 0.0007 | - | | 0.4948 | 5650 | 0.0007 | - | | 0.4992 | 5700 | 0.0116 | - | | 0.5036 | 5750 | 0.0262 | - | | 0.5080 | 5800 | 0.0336 | - | | 0.5123 | 5850 | 0.026 | - | | 0.5167 | 5900 | 0.0004 | - | | 0.5211 | 5950 | 0.0001 | - | | 0.5255 | 6000 | 0.0001 | - | | 0.5299 | 6050 | 0.0001 | - | | 0.5342 | 6100 | 0.0029 | - | | 0.5386 | 6150 | 0.0001 | - | | 0.5430 | 6200 | 0.0699 | - | | 0.5474 | 6250 | 0.0262 | - | | 0.5518 | 6300 | 0.0269 | - | | 0.5561 | 6350 | 0.0002 | - | | 0.5605 | 6400 | 0.0666 | - | | 0.5649 | 6450 | 0.0209 | - | | 0.5693 | 6500 | 0.0003 | - | | 0.5737 | 6550 | 0.0001 | - | | 0.5780 | 6600 | 0.0115 | - | | 0.5824 | 6650 | 0.0003 | - | | 0.5868 | 6700 | 0.0001 | - | | 0.5912 | 6750 | 0.0056 | - | | 0.5956 | 6800 | 0.0603 | - | | 0.5999 | 6850 | 0.0002 | - | | 0.6043 | 6900 | 0.0003 | - | | 0.6087 | 6950 | 0.0092 | - | | 0.6131 | 7000 | 0.0562 | - | | 0.6174 | 7050 | 0.0408 | - | | 0.6218 | 7100 | 0.0001 | - | | 0.6262 | 7150 | 0.0035 | - | | 0.6306 | 7200 | 0.0337 | - | | 0.6350 | 7250 | 0.0024 | - | | 0.6393 | 7300 | 0.0005 | - | | 0.6437 | 7350 | 0.0001 | - | | 0.6481 | 7400 | 0.0 | - | | 0.6525 | 7450 | 0.0001 | - | | 0.6569 | 7500 | 0.0002 | - | | 0.6612 | 7550 | 0.0004 | - | | 0.6656 | 7600 | 0.0125 | - | | 0.6700 | 7650 | 0.0005 | - | | 0.6744 | 7700 | 0.0157 | - | | 0.6788 | 7750 | 0.0055 | - | | 0.6831 | 7800 | 0.0 | - | | 0.6875 | 7850 | 0.0053 | - | | 0.6919 | 7900 | 0.0 | - | | 0.6963 | 7950 | 0.0002 | - | | 0.7006 | 8000 | 0.0002 | - | | 0.7050 | 8050 | 0.0001 | - | | 0.7094 | 8100 | 0.0001 | - | | 0.7138 | 8150 | 0.0001 | - | | 0.7182 | 8200 | 0.0007 | - | | 0.7225 | 8250 | 
0.0002 | - | | 0.7269 | 8300 | 0.0001 | - | | 0.7313 | 8350 | 0.0 | - | | 0.7357 | 8400 | 0.0156 | - | | 0.7401 | 8450 | 0.0098 | - | | 0.7444 | 8500 | 0.0 | - | | 0.7488 | 8550 | 0.0001 | - | | 0.7532 | 8600 | 0.0042 | - | | 0.7576 | 8650 | 0.0 | - | | 0.7620 | 8700 | 0.0 | - | | 0.7663 | 8750 | 0.0056 | - | | 0.7707 | 8800 | 0.0 | - | | 0.7751 | 8850 | 0.0 | - | | 0.7795 | 8900 | 0.013 | - | | 0.7839 | 8950 | 0.0 | - | | 0.7882 | 9000 | 0.0001 | - | | 0.7926 | 9050 | 0.0 | - | | 0.7970 | 9100 | 0.0 | - | | 0.8014 | 9150 | 0.0 | - | | 0.8057 | 9200 | 0.0 | - | | 0.8101 | 9250 | 0.0 | - | | 0.8145 | 9300 | 0.0007 | - | | 0.8189 | 9350 | 0.0 | - | | 0.8233 | 9400 | 0.0002 | - | | 0.8276 | 9450 | 0.0 | - | | 0.8320 | 9500 | 0.0 | - | | 0.8364 | 9550 | 0.0089 | - | | 0.8408 | 9600 | 0.0001 | - | | 0.8452 | 9650 | 0.0 | - | | 0.8495 | 9700 | 0.0 | - | | 0.8539 | 9750 | 0.0 | - | | 0.8583 | 9800 | 0.0565 | - | | 0.8627 | 9850 | 0.0161 | - | | 0.8671 | 9900 | 0.0 | - | | 0.8714 | 9950 | 0.0246 | - | | 0.8758 | 10000 | 0.0 | - | | 0.8802 | 10050 | 0.0 | - | | 0.8846 | 10100 | 0.012 | - | | 0.8889 | 10150 | 0.0 | - | | 0.8933 | 10200 | 0.0 | - | | 0.8977 | 10250 | 0.0 | - | | 0.9021 | 10300 | 0.0 | - | | 0.9065 | 10350 | 0.0 | - | | 0.9108 | 10400 | 0.0 | - | | 0.9152 | 10450 | 0.0 | - | | 0.9196 | 10500 | 0.0 | - | | 0.9240 | 10550 | 0.0023 | - | | 0.9284 | 10600 | 0.0 | - | | 0.9327 | 10650 | 0.0006 | - | | 0.9371 | 10700 | 0.0 | - | | 0.9415 | 10750 | 0.0 | - | | 0.9459 | 10800 | 0.0 | - | | 0.9503 | 10850 | 0.0 | - | | 0.9546 | 10900 | 0.0 | - | | 0.9590 | 10950 | 0.0243 | - | | 0.9634 | 11000 | 0.0107 | - | | 0.9678 | 11050 | 0.0001 | - | | 0.9721 | 11100 | 0.0 | - | | 0.9765 | 11150 | 0.0 | - | | 0.9809 | 11200 | 0.0274 | - | | 0.9853 | 11250 | 0.0 | - | | 0.9897 | 11300 | 0.0 | - | | 0.9940 | 11350 | 0.0 | - | | 0.9984 | 11400 | 0.0 | - | | 0.0007 | 1 | 0.2021 | - | | 0.0329 | 50 | 0.1003 | - | | 0.0657 | 100 | 0.2282 | - | | 0.0986 | 150 | 0.0507 | - | | 0.1314 | 200 | 0.046 | - | | 0.1643 | 250 | 0.0001 | - | | 0.1971 | 300 | 0.0495 | - | | 0.2300 | 350 | 0.0031 | - | | 0.2628 | 400 | 0.0004 | - | | 0.2957 | 450 | 0.0002 | - | | 0.3285 | 500 | 0.0 | - | | 0.3614 | 550 | 0.0 | - | | 0.3942 | 600 | 0.0 | - | | 0.4271 | 650 | 0.0001 | - | | 0.4599 | 700 | 0.0 | - | | 0.4928 | 750 | 0.0 | - | | 0.5256 | 800 | 0.0 | - | | 0.5585 | 850 | 0.0 | - | | 0.5913 | 900 | 0.0001 | - | | 0.6242 | 950 | 0.0 | - | | 0.6570 | 1000 | 0.0001 | - | | 0.6899 | 1050 | 0.0 | - | | 0.7227 | 1100 | 0.0 | - | | 0.7556 | 1150 | 0.0 | - | | 0.7884 | 1200 | 0.0 | - | | 0.8213 | 1250 | 0.0 | - | | 0.8541 | 1300 | 0.0 | - | | 0.8870 | 1350 | 0.0 | - | | 0.9198 | 1400 | 0.0 | - | | 0.9527 | 1450 | 0.0001 | - | | 0.9855 | 1500 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.15.0 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms 
in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
Anjaan-Khadka/Nepali-Summarization
Anjaan-Khadka
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "ne", "dataset:csebuetnlp/xlsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,677,152,698,000
2023-03-17T08:45:04
21
0
--- datasets: - csebuetnlp/xlsum language: - ne tags: - summarization - mT5 widget: - text: तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ । model-index: - name: Anjaan-Khadka/summarization_nepali results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: test metrics: - type: rouge value: 36.5002 name: ROUGE-1 verified: false --- # Adaptation of mT5-multilingual-XLSum for the Nepali Language This repository contains an adapted version of mT5-multilingual-XLSum for a single language (Nepali). View the original [mT5-multilingual-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM article_text = " तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।" model_name = "Anjaan-Khadka/summarization_nepali" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( article_text, return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ```
[ "SUMMARIZATION" ]
Non_BioNLP
sndsabin/fake-news-classifier
sndsabin
null
[ "license:gpl-3.0", "region:us" ]
1,648,716,829,000
2022-04-07T08:58:17
0
0
--- license: gpl-3.0 --- **Fake News Classifier**: Text classification model to detect fake news articles! **Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
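The card does not state which framework or architecture the published classifier uses, so the sketch below is only an illustrative baseline for the same task and dataset: a TF-IDF plus logistic-regression pipeline trained on the Kaggle data linked above. The `Fake.csv`/`True.csv` file names and the `text` column are assumptions based on that dataset's usual layout, not details taken from this repository.

```python
# Illustrative baseline only; not the published model in this repository.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed file names/columns from the Kaggle "Fake and real news" dataset.
fake = pd.read_csv("Fake.csv").assign(label=0)
real = pd.read_csv("True.csv").assign(label=1)
data = pd.concat([fake, real], ignore_index=True)

X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["label"], test_size=0.2, random_state=42, stratify=data["label"]
)

clf = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```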
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF
TheBloke
text-generation
[ "transformers", "gguf", "solar", "finetune", "dpo", "Instruct", "augmentation", "german", "text-generation", "en", "de", "dataset:argilla/distilabel-math-preference-dpo", "base_model:fblgit/LUNA-SOLARkrautLM-Instruct", "base_model:quantized:fblgit/LUNA-SOLARkrautLM-Instruct", "license:cc-by-nc-4.0", "region:us", "conversational" ]
1,703,336,543,000
2023-12-23T13:08:59
368
4
--- base_model: fblgit/LUNA-SOLARkrautLM-Instruct datasets: - argilla/distilabel-math-preference-dpo language: - en - de library_name: transformers license: cc-by-nc-4.0 model_name: Luna SOLARkrautLM Instruct pipeline_tag: text-generation tags: - finetune - dpo - Instruct - augmentation - german inference: false model_creator: FBL model_type: solar prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Luna SOLARkrautLM Instruct - GGUF - Model creator: [FBL](https://huggingface.co/fblgit) - Original model: [Luna SOLARkrautLM Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct) <!-- description start --> ## Description This repo contains GGUF format model files for [FBL's Luna SOLARkrautLM Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF) * [FBL's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [luna-solarkrautlm-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes | | [luna-solarkrautlm-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 4.66 GB| 7.16 GB | very small, high quality loss | | [luna-solarkrautlm-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss | | [luna-solarkrautlm-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss | | [luna-solarkrautlm-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [luna-solarkrautlm-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss | | [luna-solarkrautlm-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended | | [luna-solarkrautlm-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [luna-solarkrautlm-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended | | [luna-solarkrautlm-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended | | [luna-solarkrautlm-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss | | [luna-solarkrautlm-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF/blob/main/luna-solarkrautlm-instruct.Q8_0.gguf) | Q8_0 | 8 | 11.40 GB| 13.90 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF and below it, a specific filename to download, such as: luna-solarkrautlm-instruct.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF luna-solarkrautlm-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF luna-solarkrautlm-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m luna-solarkrautlm-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). 
## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./luna-solarkrautlm-instruct.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./luna-solarkrautlm-instruct.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: FBL's Luna SOLARkrautLM Instruct ![Juanako.AI & SauerkrautLM Productions](https://vago-solutions.de/wp-content/uploads/2023/12/sauerkrautlm-solar.png "LUNA-SOLARkrautLM-Instruct") ## VAGO solutions LUNA-SOLARkrautLM-Instruct Introducing **LUNA-SOLARkrautLM-Instruct** – a UNA-Sauerkraut version of the powerful [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) ! Aligned with **DPO** and tamed with **UNA**. # Table of Contents 1. [Overview of all LUNA-SOLARkrautLM-Instruct models](#all-sauerkrautlm-solar-instruct-models) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training Dataset](#training-dataset) - [Data Contamination Test](#data-contamination-test-results) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. [Contact](#contact) 7. [Collaborations](#collaborations) 8. 
[Acknowledgement](#acknowledgement) ## Model Details **LUNA-SOLARkrautLM-Instruct** - **Model Type:** LUNA-SOLARkrautLM-Instruct is a UNA Model based on [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) and the powerful set of [SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct/) - **Language(s):** English, German - **License:** cc-by-nc-4.0 - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) [Juanako.AI - UNA](mailto:[email protected]) ### Training Dataset: LUNA-SOLARkrautLM-Instruct was trained with mix of German data augmentation and translated data. Aligned through **DPO** with our **new German SauerkrautLM-DPO dataset** based on parts of the SFT SauerkrautLM dataset as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers. Added with additional **translated Parts of the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (Our dataset do not contain any TruthfulQA prompts - check Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).** We found, that only a simple translation of training data can lead to unnatural German phrasings. Data augmentation techniques were used to grant grammatical, syntactical correctness and a more natural German wording in our training data. We improved the German language skills on this model. Nevertheless, certain formulations may occur that are not entirely correct. ### Data Contamination Test Results Some models on the HuggingFace leaderboard had problems with wrong data getting mixed in. We checked our SauerkrautLM-DPO dataset with a special test [1] on this model as target model and upstage/SOLAR-10.7B-Instruct-v1.0 as reference model. The HuggingFace team used the same methods [2, 3]. Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination. *The data contamination test results of HellaSwag and Winograde will be added once [1] supports them.* | Dataset | ARC | MMLU | TruthfulQA | GSM8K | |------------------------------|-------|-------|-------|-------| | **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 | [1] https://github.com/swj0419/detect-pretrain-code-contamination [2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06 [3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230 ### Prompt Template: ``` <|im_start|>system Du bist LUNA-SOLARkrautLM, ein großes Sprachmodell, das höflich und kompetent antwortet.<|im_end|> <|im_start|>user Wie geht es dir?<|im_end|> <|im_start|>assistant ``` ``` ### User: Hello, how are you? ### Assistant: Hi there! I am an AI language model, so I don't have personal feelings or emotions in the traditional sense. However, I can assure you that my systems and processes are functioning well at this moment, allowing me to provide helpful responses for your queries. How may I assist you today? 
``` ## Evaluation ``` hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric |Value | |Stderr| |-----|-------|----------|-----:|-----------|-----:|---|-----:| |gsm8k|Yaml |get-answer| 5|exact_match|0.6467|± |0.0132| hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64) | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |--------------|-------|------|-----:|------|-----:|---|-----:| |truthfulqa_mc2|Yaml |none | 0|acc |0.7368|± |0.0149| hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 25, batch_size: auto (32) | Tasks |Version|Filter|n-shot| Metric |Value| |Stderr| |-------------|-------|------|-----:|--------|----:|---|-----:| |arc_challenge|Yaml |none | 25|acc |0.692|± |0.0135| | | |none | 25|acc_norm|0.715|± |0.0132| hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64) | Tasks |Version|Filter|n-shot|Metric| Value | |Stderr| |-----------|-------|------|-----:|------|------:|---|-----:| |paws_de |Yaml |none | 0|acc | 0.3965|± |0.0109| |wmt16-en-de|Yaml |none | 0|bleu | 3.5784|± |0.1325| | | |none | 0|ter |64.5707|± |0.4514| | | |none | 0|chrf |45.7068|± |0.3861| |xnli_de |Yaml |none | 0|acc | 0.4129|± |0.0099| hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 10, batch_size: auto (32) | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------|-------|------|-----:|--------|-----:|---|-----:| |hellaswag|Yaml |none | 10|acc |0.7131|± |0.0045| | | |none | 10|acc_norm|0.8815|± |0.0032| hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto (64) | Tasks |Version|Filter|n-shot|Metric| Value | |Stderr| |-----------|-------|------|-----:|------|------:|---|-----:| |wmt16-de-en|Yaml |none | 5|bleu |14.9310|± |0.8014| | | |none | 5|ter |46.3206|± |0.4087| | | |none | 5|chrf |60.8637|± |0.4436| |wmt16-en-de|Yaml |none | 5|bleu | 6.2016|± |0.2918| | | |none | 5|ter |63.9997|± |0.4591| | | |none | 5|chrf |51.1399|± |0.3978| |xnli_de |Yaml |none | 5|acc | 0.4703|± |0.0100| hf (pretrained=fblgit/LUNA-SOLARkrautLM-Instruct,dtype=float16), gen_kwargs: (), limit: None, num_fewshot: 5, batch_size: auto (16) | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |---------------------------------------|-------|------|-----:|------|-----:|---|-----:| |mmlu |N/A |none | 0|acc |0.6461|± |0.1215| | - humanities |N/A |none | 5|acc |0.5960|± |0.1200| | - formal_logic |Yaml |none | 5|acc |0.4683|± |0.0446| | - high_school_european_history |Yaml |none | 5|acc |0.8121|± |0.0305| | - high_school_us_history |Yaml |none | 5|acc |0.8480|± |0.0252| | - high_school_world_history |Yaml |none | 5|acc |0.8312|± |0.0244| | - international_law |Yaml |none | 5|acc |0.7851|± |0.0375| | - jurisprudence |Yaml |none | 5|acc |0.7685|± |0.0408| | - logical_fallacies |Yaml |none | 5|acc |0.7423|± |0.0344| | - moral_disputes |Yaml |none | 5|acc |0.7283|± |0.0239| | - moral_scenarios |Yaml |none | 5|acc |0.3899|± |0.0163| | - philosophy |Yaml |none | 5|acc |0.7074|± |0.0258| | - prehistory |Yaml |none | 5|acc |0.7716|± |0.0234| | - professional_law |Yaml |none | 5|acc |0.4824|± |0.0128| | - world_religions |Yaml |none | 5|acc |0.7661|± |0.0325| | - other |N/A |none | 5|acc |0.7097|± |0.0900| | - business_ethics |Yaml |none | 5|acc |0.7700|± |0.0423| | - 
clinical_knowledge |Yaml |none | 5|acc |0.6792|± |0.0287| | - college_medicine |Yaml |none | 5|acc |0.6647|± |0.0360| | - global_facts |Yaml |none | 5|acc |0.3600|± |0.0482| | - human_aging |Yaml |none | 5|acc |0.6861|± |0.0311| | - management |Yaml |none | 5|acc |0.8350|± |0.0368| | - marketing |Yaml |none | 5|acc |0.8504|± |0.0234| | - medical_genetics |Yaml |none | 5|acc |0.6700|± |0.0473| | - miscellaneous |Yaml |none | 5|acc |0.7893|± |0.0146| | - nutrition |Yaml |none | 5|acc |0.7549|± |0.0246| | - professional_accounting |Yaml |none | 5|acc |0.5213|± |0.0298| | - professional_medicine |Yaml |none | 5|acc |0.7353|± |0.0268| | - virology |Yaml |none | 5|acc |0.5783|± |0.0384| | - social_sciences |N/A |none | 5|acc |0.7501|± |0.0684| | - econometrics |Yaml |none | 5|acc |0.5175|± |0.0470| | - high_school_geography |Yaml |none | 5|acc |0.8485|± |0.0255| | - high_school_government_and_politics|Yaml |none | 5|acc |0.8912|± |0.0225| | - high_school_macroeconomics |Yaml |none | 5|acc |0.6615|± |0.0240| | - high_school_microeconomics |Yaml |none | 5|acc |0.7311|± |0.0288| | - high_school_psychology |Yaml |none | 5|acc |0.8385|± |0.0158| | - human_sexuality |Yaml |none | 5|acc |0.7023|± |0.0401| | - professional_psychology |Yaml |none | 5|acc |0.6683|± |0.0190| | - public_relations |Yaml |none | 5|acc |0.6909|± |0.0443| | - security_studies |Yaml |none | 5|acc |0.7633|± |0.0272| | - sociology |Yaml |none | 5|acc |0.8358|± |0.0262| | - us_foreign_policy |Yaml |none | 5|acc |0.8800|± |0.0327| | - stem |N/A |none | 5|acc |0.5569|± |0.1360| | - abstract_algebra |Yaml |none | 5|acc |0.3800|± |0.0488| | - anatomy |Yaml |none | 5|acc |0.6148|± |0.0420| | - astronomy |Yaml |none | 5|acc |0.7237|± |0.0364| | - college_biology |Yaml |none | 5|acc |0.7708|± |0.0351| | - college_chemistry |Yaml |none | 5|acc |0.4600|± |0.0501| | - college_computer_science |Yaml |none | 5|acc |0.5400|± |0.0501| | - college_mathematics |Yaml |none | 5|acc |0.2700|± |0.0446| | - college_physics |Yaml |none | 5|acc |0.3333|± |0.0469| | - computer_security |Yaml |none | 5|acc |0.7300|± |0.0446| | - conceptual_physics |Yaml |none | 5|acc |0.6213|± |0.0317| | - electrical_engineering |Yaml |none | 5|acc |0.6276|± |0.0403| | - elementary_mathematics |Yaml |none | 5|acc |0.4788|± |0.0257| | - high_school_biology |Yaml |none | 5|acc |0.8065|± |0.0225| | - high_school_chemistry |Yaml |none | 5|acc |0.5123|± |0.0352| | - high_school_computer_science |Yaml |none | 5|acc |0.7000|± |0.0461| | - high_school_mathematics |Yaml |none | 5|acc |0.3889|± |0.0297| | - high_school_physics |Yaml |none | 5|acc |0.3576|± |0.0391| | - high_school_statistics |Yaml |none | 5|acc |0.5926|± |0.0335| | - machine_learning |Yaml |none | 5|acc |0.4554|± |0.0473| | Groups |Version|Filter|n-shot|Metric|Value | |Stderr| |------------------|-------|------|-----:|------|-----:|---|-----:| |mmlu |N/A |none | 0|acc |0.6461|± |0.1215| | - humanities |N/A |none | 5|acc |0.5960|± |0.1200| | - other |N/A |none | 5|acc |0.7097|± |0.0900| | - social_sciences|N/A |none | 5|acc |0.7501|± |0.0684| | - stem |N/A |none | 5|acc |0.5569|± |0.1360| ``` ### MT-Bench ``` ########## Average ########## score model gpt-4 8.990625 gpt-3.5-turbo 7.943750 claude-instant-v1 7.905660 claude-v1 7.900000 UNA-SOLAR-10.7B-Instruct-v1.0 7.521875 LUNA-SOLARkrautLM-Instruct 7.462500 vicuna-33b-v1.3 7.121875 wizardlm-30b 7.009375 Llama-2-70b-chat 6.856250 Llama-2-13b-chat 6.650000 guanaco-33b 6.528125 tulu-30b 6.434375 guanaco-65b 6.409375 oasst-sft-7-llama-30b 6.409375 palm-2-chat-bison-001 
6.400000 mpt-30b-chat 6.393750 vicuna-13b-v1.3 6.387500 wizardlm-13b 6.353125 Llama-2-7b-chat 6.268750 vicuna-7b-v1.3 5.996875 baize-v2-13b 5.750000 nous-hermes-13b 5.553459 mpt-7b-chat 5.459119 gpt4all-13b-snoozy 5.452830 koala-13b 5.350000 mpt-30b-instruct 5.218750 falcon-40b-instruct 5.168750 h2ogpt-oasst-open-llama-13b 4.625000 alpaca-13b 4.531250 chatglm-6b 4.500000 oasst-sft-4-pythia-12b 4.318750 rwkv-4-raven-14b 3.984375 dolly-v2-12b 3.275000 fastchat-t5-3b 3.040625 stablelm-tuned-alpha-7b 2.753125 llama-13b 2.606250 ``` ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, [VAGO Solutions](https://huggingface.co/VAGOsolutions), where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us. [Juanako.AI](https://huggingface.co/fblgit) is also seeking support and investment for our startup, we also are open for collaborating with other labs to make awesome models like this one. ## Acknowledgement Big Hug to [VAGO Solutions](https://huggingface.co/VAGOsolutions), we merely used our UNA transformers library on their code and dataset, nothing else. This won't be possible without them, thanks! Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to [upstage](https://huggingface.co/upstage) for providing the open source community with their latest technology! <!-- original-model-card end -->
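For completeness, here is a minimal, hedged `transformers` sketch for running the original, unquantised fblgit/LUNA-SOLARkrautLM-Instruct checkpoint (as opposed to the GGUF files listed above). The repo id comes from the links in this card, and the prompt string follows the ChatML template documented above; the system message, question, and generation settings are purely illustrative, and `device_map="auto"` assumes `accelerate` is installed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "fblgit/LUNA-SOLARkrautLM-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build the ChatML prompt exactly as shown in the prompt-template section above.
system_message = "You are a helpful assistant."
user_prompt = "Summarise the idea behind DPO alignment in two sentences."
prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```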
[ "TRANSLATION" ]
Non_BioNLP
halee9/translation_en_ko
halee9
text2text-generation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-ko-en", "base_model:finetune:Helsinki-NLP/opus-mt-ko-en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,710,610,256,000
2024-03-16T22:43:22
128
0
--- base_model: Helsinki-NLP/opus-mt-ko-en license: apache-2.0 metrics: - bleu tags: - generated_from_trainer model-index: - name: translation_en_ko results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # translation_en_ko This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4074 - Bleu: 30.5108 - Gen Len: 42.414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.5644 | 1.0 | 7500 | 1.4721 | 29.3866 | 42.268 | | 1.3933 | 2.0 | 15000 | 1.4074 | 30.5108 | 42.414 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
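Since the usage sections above are empty, here is a minimal inference sketch. It assumes the checkpoint is a standard Marian-style seq2seq model (it is fine-tuned from Helsinki-NLP/opus-mt-ko-en) and, going by the repository name, that the fine-tuned direction is English to Korean; the card itself does not state the direction, so verify before relying on it. The sample sentence and beam settings are illustrative only.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "halee9/translation_en_ko"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# English input, assuming the fine-tuned direction is en -> ko (see note above).
text = "The weather is nice today."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```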
[ "TRANSLATION" ]
Non_BioNLP
lamm-mit/Cephalo-Idefics-2-vision-10b-beta
lamm-mit
image-text-to-text
[ "transformers", "safetensors", "idefics2", "image-text-to-text", "nlp", "code", "vision", "chemistry", "engineering", "biology", "bio-inspired", "text-generation-inference", "materials science", "conversational", "multilingual", "arxiv:2405.19076", "license:apache-2.0", "endpoints_compatible", "region:us" ]
1,716,909,925,000
2024-05-30T10:34:41
12
0
--- language: - multilingual library_name: transformers license: apache-2.0 pipeline_tag: image-text-to-text tags: - nlp - code - vision - chemistry - engineering - biology - bio-inspired - text-generation-inference - materials science inference: parameters: temperature: 0.3 widget: - messages: - role: user content: <|image_1|>Can you describe what you see in the image? --- ## Model Summary Cephalo is a series of multimodal materials science focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks. A novel aspect of Cephalo's development is the innovative dataset generation method. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training. Cephalo can interpret complex visual scenes and generating contextually accurate language descriptions and answer queries. The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation. The architecture combines a vision encoder model and an autoregressive transformer to process complex natural language understanding. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/kl5GWBP9WS0D4uwd1t3S7.png) Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods. This version of Cephalo, lamm-mit/Cephalo-Idefics-2-vision-10b-beta, is based on a merged expansion of the https://huggingface.co/lamm-mit/Cephalo-Idefics-2-vision-8b-beta and the HuggingFaceM4/idefics2-8b-chatty model. This method allows us to increase the depth of the model and focus on learning more complex representations and associations in deeper layers of the network. The lamm-mit/Cephalo-Idefics-2-vision-10b-beta model is trained for two epochs, while the lamm-mit/Cephalo-Idefics-2-vision-10b-alpha version was trained for one epoch. The model was trained in several stages: **Step 1**: Train https://huggingface.co/lamm-mit/Cephalo-Idefics-2-vision-8b-beta by fine-tuning the HuggingFaceM4/idefics2-8b-chatty model. **Step 2**: Combine the https://huggingface.co/lamm-mit/Cephalo-Idefics-2-vision-8b-beta decoder with the last 8 layers of the HuggingFaceM4/idefics2-8b-chatty decoder. **Step 3**: Fine-tune the merged model, which now has 40 decoder layers and a total of 10b parameters. The model was trained on a combination of scientific text-image data extracted from Wikipedia and scientific papers. For further details on the base model, see: https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty. More details about technical aspects of the model, training and example applications to materials science problems are provided in the paper (reference at the bottom). 
### Chat Format The lamm-mit/Cephalo-Idefics-2-vision-10b-beta model is suitable for one or more image inputs, wih prompts using the chat format as follows: ```raw User: You carefully study the image, and respond accurately, but succinctly. Think step-by-step. <image>What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI.<end_of_utterance> Assistant: ``` where the model generates the text after `Assistant:` . For multi-turn conversations, the prompt should be formatted as follows: ```raw User: You carefully study the image, and respond accurately, but succinctly. Think step-by-step. <image>What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI.<end_of_utterance> Assistant: The image depicts ants climbing a vertical surface using their legs and claws. This behavior is observed in nature and can inspire the design of multi-agent AI systems that mimic the coordinated movement of these insects. The relevance lies in the potential application of such systems in robotics and materials science, where efficient and adaptive movement is crucial.<end_of_utterance> User: How could this be used to design a fracture resistant material?<end_of_utterance> Assistant: ``` If you need to manually set the chat template: ``` IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}" ``` ### Sample inference code This code snippets show how to get quickly started on a GPU: ```python from PIL import Image import requests DEVICE='cuda:0' from transformers import AutoProcessor, Idefics2ForConditionalGeneration from tqdm.notebook import tqdm model_id='lamm-mit/Cephalo-Idefics-2-vision-10b-beta' model = Idefics2ForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, #if your GPU allows _attn_implementation="flash_attention_2", #make sure Flash Attention 2 is installed trust_remote_code=True, ).to (DEVICE) processor = AutoProcessor.from_pretrained( f"{model_id}", do_image_splitting=True ) ``` See section towards the end for more comments on model optimization, including quantization. If you need to manually set the chat template: ```python IDEFICS2_CHAT_TEMPLATE = "{% for message in messages %}{{message['role'].capitalize()}}{% if message['content'][0]['type'] == 'image' %}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if line['type'] == 'text' %}{{line['text']}}{% elif line['type'] == 'image' %}{{ '<image>' }}{% endif %}{% endfor %}<end_of_utterance>\n{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}" tokenizer = AutoTokenizer.from_pretrained(base_model_id, use_fast=True) tokenizer.chat_template = IDEFICS2_CHAT_TEMPLATE processor.tokenizer = tokenizer ``` Simple inference example: ``` from transformers.image_utils import load_image image = load_image("https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg") # Create inputs messages = [ { "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": "What is shown in this image, and what is the relevance for materials design? 
Include a discussion of multi-agent AI."}, ] }, ] prompt = processor.apply_chat_template(messages, add_generation_prompt=True) # Get inputs using the processor inputs = processor(text=prompt, images=[image], return_tensors="pt") inputs = {k: v.to(DEVICE) for k, v in inputs.items()} # Generate generated_ids = model.generate(**inputs, max_new_tokens=500) generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_texts) ``` Next we provide a convenience function for inference. This function takes the model, processor, question, and images, along with messages and images objects for repeated chat-like interactions with the model. ```python def ask_about_image (model, processor, question, images_input=[], verbatim=False, temperature=0.1, show_image=False, system="You are a biomaterials scientist who responds accurately. ", init_instr = "", show_conversation=True, max_new_tokens=256, messages=[], images=[], use_Markdown=False, ): query = question images_input=ensure_list(images_input) if len (images)==0: if len (images_input)>0: for image in tqdm (images_input) : if is_url(image): image= load_image(image) images.append (image) if show_image: display ( image ) if len (messages)==0: base_message = { "role": "user", "content": [ {"type": "text", "text": system + init_instr}, # Image messages will be added dynamically here {"type": "text", "text": query} ] } # Ensure the images_input is a list images_input = ensure_list(images_input) # Add image messages dynamically image_messages = [{"type": "image"} for _ in images_input] base_message["content"][1:1] = image_messages # Insert image messages before the last text message # Append the constructed message to messages list messages.append(base_message) else: messages.append ( { "role": "user", "content": [ {"type": "text", "text": query } ] } ) if verbatim: print (messages) text = processor.apply_chat_template(messages, add_generation_prompt=True) inputs = processor(text=[text.strip()], images=images, return_tensors="pt", padding=True).to(DEVICE) generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, temperature=temperature, do_sample=True) generated_texts = processor.batch_decode(generated_ids[:, inputs["input_ids"].size(1):], skip_special_tokens=True) messages.append ( { "role": "assistant", "content": [ {"type": "text", "text": generated_texts[0]}, ] } ) formatted_conversation = format_conversation(messages, images) # Display the formatted conversation, e.g. in Jupyter Notebook if show_conversation: if use_Markdown: display(Markdown(formatted_conversation)) else: display(HTML(formatted_conversation)) return generated_texts, messages, images question = "What is shown in this image, and what is the relevance for materials design? Include a discussion of multi-agent AI." url1 = "https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg" response, messages,images= ask_about_image ( model, processor, question, images_input=[url1,], temperature=0.1, system= '', init_instr='You carefully study the image and provide detailed answers. 
Think step-by-step.\n\n', show_conversation=True, max_new_tokens=512, messages=[], images=[]) ``` Sample output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/5n6oRNHrfwHkBX0QertZp.png) <small>Image by [Vaishakh Manohar](https://www.quantamagazine.org/the-simple-algorithm-that-ants-use-to-build-bridges-20180226/)</small> <pre style="white-space: pre-wrap;"> The image shows a group of ants moving in coordinated patterns on a surface. This illustrates the concept of multi-agent AI, which involves the study and simulation of complex systems involving multiple agents (in this case, ants) interacting with each other and their environment. The relevance for materials design is in understanding how these natural systems exhibit emergent behaviors such as self-organization, which can inspire the development of new materials and systems that mimic these natural processes. By studying the movement patterns of ants, researchers can gain insights into how to design materials that exhibit similar emergent properties, leading to improved performance in various applications. Multi-agent AI involves creating models that describe the interactions between individual agents and their environment, allowing for the simulation of complex systems with multiple interacting components. This approach can be applied to various fields, including materials science, where understanding emergent behaviors at the microscopic level can lead to the design of new materials with enhanced properties. </pre> ## Dataset generation The schematic below shows a visualization of the approach to generate datasets for training the vision model. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents. It involves extracting images and captions from PDFs to create well-reasoned image-text pairs, utilizing large language models (LLMs) for natural language processing. These image-text pairs are then refined and validated through LLM-based NLP processing, ensuring high-quality and contextually relevant data for training. The image below shows reproductions of two representative pages of the scientific article (here, Spivak, Buehler, et al., 2011), and how they are used to extract visual scientific data for training the Cephalo model. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/qHURSBRWEDgHy4o56escN.png) # Further model optimizations If your GPU allows, load and run inference in half precision (`torch.float16` or `torch.bfloat16`). ```diff model = AutoModelForVision2Seq.from_pretrained( "lamm-mit/Cephalo-Idefics-2-vision-10b-beta", + torch_dtype=torch.float16, ).to(DEVICE) ``` **Vision encoder efficiency** Given the high resolution supported, the vision part of the model can be memory hungry depending on your configuration. If you are GPU-memory-constrained, you can: - **deactivate the image splitting.** To do so, add `do_image_splitting=False` when initializing the processor (`AutoProcessor.from_pretrained`). There are no changes required on the model side. Note that only the sft model has been trained with image splitting. - **decrease the maximum image resolution.** To do so, add `size= {"longest_edge": 448, "shortest_edge": 378}` when initializing the processor (`AutoProcessor.from_pretrained`). In particular, the `longest_edge` value can be adapted to fit the need (the default value is `980`). 
We recommend using values that are multiples of 14. There are no changes required on the model side. `do_image_splitting=True` is especially needed to boost performance on complex tasks where a very large image is used as input. The model was fine-tuned with image splitting turned on. For simple tasks, this argument can be safely set to `False`. **Using Flash-attention 2 to speed up generation** <details><summary>Click to expand.</summary> Mke sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for the package installation. Simply change the snippet above with: ```diff model = AutoModelForVision2Seq.from_pretrained( "lamm-mit/Cephalo-Idefics-2-vision-10b-beta", + torch_dtype=torch.bfloat16, + _attn_implementation="flash_attention_2", ).to(DEVICE) ``` </details> **4 bit quantization with bitsandbytes** <details><summary>Click to expand.</summary> It is possible to load Cephalo-Idefics-2-vision-10b-beta in 4bits with `bitsandbytes`. Make sure that you have `accelerate` and `bitsandbytes` installed. ```diff + from transformers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForVision2Seq.from_pretrained( "lamm-mit/Cephalo-Idefics-2-vision-10b-beta", + torch_dtype=torch.bfloat16, + quantization_config=quantization_config, ).to(DEVICE) ``` </details> ## Citation Please cite as: ```bibtex @article{Buehler_Cephalo_2024, title={Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design}, author={Markus J. Buehler}, journal={arXiv preprint arXiv:2405.19076}, year={2024} } ```
[ "QUESTION_ANSWERING" ]
Non_BioNLP
gauravkoradiya/T5-Finetuned-Summarization-DialogueDataset
gauravkoradiya
summarization
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "en", "dataset:knkarthick/dialogsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,681,607,546,000
2023-04-16T01:24:14
151
1
--- datasets: - knkarthick/dialogsum language: - en library_name: transformers license: apache-2.0 metrics: - bleu - rouge pipeline_tag: summarization ---
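The card ships only metadata, so the following is a minimal, hedged usage sketch. It assumes the repository hosts a standard seq2seq checkpoint loadable through the `summarization` pipeline, consistent with the declared `pipeline_tag`; the sample dialogue and generation settings are illustrative, and some T5 fine-tunes additionally expect a `summarize: ` prefix, which is not documented here.

```python
from transformers import pipeline

# Assumes the checkpoint works with the standard summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="gauravkoradiya/T5-Finetuned-Summarization-DialogueDataset",
)

# Illustrative dialogue in the DialogSum style (speaker tags are assumptions).
dialogue = (
    "#Person1#: I'm thinking about getting a new laptop. Any suggestions? "
    "#Person2#: It depends on your budget, but for programming I'd want at least 16 GB of RAM. "
    "#Person1#: Good point, I'll compare a few models this weekend."
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```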
[ "SUMMARIZATION" ]
Non_BioNLP
MaLA-LM/lucky52-bloom-7b1-no-5
MaLA-LM
text-generation
[ "transformers", "pytorch", "bloom", "text-generation", "generation", "question answering", "instruction tuning", "multilingual", "dataset:MBZUAI/Bactrian-X", "arxiv:2404.04850", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,712,217,803,000
2024-12-10T09:07:41
14
0
--- datasets: - MBZUAI/Bactrian-X language: - multilingual library_name: transformers license: cc-by-nc-4.0 pipeline_tag: text-generation tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X in 52 languages. We progressively add one language at a time during instruction fine-tuning and train 52 models in total. We then evaluate those models on three multilingual benchmarks. Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details. * Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1) * Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani * Instruction language codes: en, zh, af, ar, az * Training method: full-parameter fine-tuning. ### Usage The model checkpoint should be loaded with the `transformers` library. ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-5") model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-5") ``` ### Citation ``` @inproceedings{ji2025lucky52, title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM}, author={Shaoxiong Ji and Pinzhen Chen}, year={2025}, booktitle={Proceedings of COLING}, url={https://arxiv.org/abs/2404.04850}, } ```
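As a hedged extension of the usage snippet above, the sketch below generates a response with the loaded model. The card does not document the prompt template used during Bactrian-X instruction tuning, so the plain instruction string and greedy decoding here are assumptions for illustration, not the authors' recommended settings.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaLA-LM/lucky52-bloom-7b1-no-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The instruction-tuning prompt template is not documented in this card,
# so a plain instruction string is used; adjust if the authors publish a template.
prompt = "Translate the following sentence into German: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```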
[ "QUESTION_ANSWERING" ]
Non_BioNLP
RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us" ]
1,721,885,085,000
2024-07-25T11:07:58
26
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) airoboros-l2-13b-3.0 - GGUF - Model creator: https://huggingface.co/jondurbin/ - Original model: https://huggingface.co/jondurbin/airoboros-l2-13b-3.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [airoboros-l2-13b-3.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q2_K.gguf) | Q2_K | 4.52GB | | [airoboros-l2-13b-3.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.IQ3_XS.gguf) | IQ3_XS | 4.99GB | | [airoboros-l2-13b-3.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.IQ3_S.gguf) | IQ3_S | 5.27GB | | [airoboros-l2-13b-3.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q3_K_S.gguf) | Q3_K_S | 5.27GB | | [airoboros-l2-13b-3.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.IQ3_M.gguf) | IQ3_M | 5.57GB | | [airoboros-l2-13b-3.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q3_K.gguf) | Q3_K | 5.9GB | | [airoboros-l2-13b-3.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q3_K_M.gguf) | Q3_K_M | 5.9GB | | [airoboros-l2-13b-3.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q3_K_L.gguf) | Q3_K_L | 6.45GB | | [airoboros-l2-13b-3.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.IQ4_XS.gguf) | IQ4_XS | 6.54GB | | [airoboros-l2-13b-3.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q4_0.gguf) | Q4_0 | 6.86GB | | [airoboros-l2-13b-3.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.IQ4_NL.gguf) | IQ4_NL | 6.9GB | | [airoboros-l2-13b-3.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q4_K_S.gguf) | Q4_K_S | 6.91GB | | [airoboros-l2-13b-3.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q4_K.gguf) | Q4_K | 7.33GB | | [airoboros-l2-13b-3.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q4_K_M.gguf) | Q4_K_M | 7.33GB | | [airoboros-l2-13b-3.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q4_1.gguf) | Q4_1 | 7.61GB | | [airoboros-l2-13b-3.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q5_0.gguf) | Q5_0 | 8.36GB | | [airoboros-l2-13b-3.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q5_K_S.gguf) | Q5_K_S | 8.36GB | | [airoboros-l2-13b-3.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q5_K.gguf) | Q5_K | 8.6GB | | 
[airoboros-l2-13b-3.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q5_K_M.gguf) | Q5_K_M | 8.6GB | | [airoboros-l2-13b-3.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q5_1.gguf) | Q5_1 | 9.1GB | | [airoboros-l2-13b-3.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q6_K.gguf) | Q6_K | 9.95GB | | [airoboros-l2-13b-3.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf/blob/main/airoboros-l2-13b-3.0.Q8_0.gguf) | Q8_0 | 12.88GB | Original model description: --- license: llama2 datasets: - jondurbin/airoboros-3.0 --- ### Overview Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros) #### IMPORTANT NOTE - llama-2 chat format!!! The airoboros 3.0 models have been updated to use llama-2 chat format, rather than vicuna style user/assistant! This is a breaking change, although most inference systems support llama-2 chat templating. See prompt format below for all of the details. #### Highlights A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras: * MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and lets the work be done by traditional operations. * Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience. This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay. Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format is llama-2 chat. ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` For multi-turn, the prompt format is as follows: ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST] ``` Here is a simple python script that can help generate the prompt: ```python def get_chat_prompt(message, chat_history, system_prompt): texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n'] do_strip = False for user_input, response in chat_history: user_input = user_input.strip() if do_strip else user_input do_strip = True texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ') message = message.strip() if do_strip else message texts.append(f'{message} [/INST] ') return ''.join(texts) def get_prompt(item): """ `item` is expected to be in chatml format, e.g. { "id": 1, "conversations": [ { "role": "system", "content": "You are a helpful, unbiased, uncensored assistant." }, { "role": "user", "content": "Hi!" }, { "role": "assistant", "content": "Hello, how are you today?" }, { "role": "user", "content": "Just peachy, and you?"
} ] } """ system_prompt = item['conversations'][0]['content'] offset = 1 if item['conversations'][0]['role'] != 'system': system_prompt = 'You are an unbiased, uncensored, helpful assistant.' offset = 0 chat_history = [ (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content']) for idx in range(offset, len(item['conversations']) - 1, 2) ] message = item['conversations'][-1]['content'] return get_chat_prompt(message, chat_history, system_prompt) ``` ### Fine tuning info https://wandb.ai/jondurbin/airoboros-l2-13b-3.0/runs/97utxyk9?workspace=user-jondurbin ### Helpful usage tips #### MathJSON Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/ I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py) __Use a very low temperature!__ Example: ``` [INST] <<SYS>> You are an assistant with vast knowledge in all things math. <</SYS>> Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST] ``` Output: ``` The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1. Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr². Solution as MathJSON: <mathjson> [ "Multiply", "Pi", [ "Power", 17.2456, 2 ] ] </mathjson> ``` You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response. #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. 
Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. 
``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. 
Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... 
return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context): prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions. The models with `-l2` in the name have a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The models with `-m-` are mistral-7b (apache 2.0) The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
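Since the files listed above are plain GGUF quantizations, they can be loaded with any llama.cpp-compatible runtime. The sketch below is illustrative only: it assumes llama-cpp-python is installed, that the Q4_K_M file has been downloaded locally, and that the llama-2 chat format from the prompt-format section is applied verbatim; the file path, context size, and sampling settings are placeholder assumptions, not values taken from this card.

```python
# Minimal usage sketch (untested): run one of the GGUF quants above with llama-cpp-python.
from llama_cpp import Llama

# Assumed local path to a downloaded quant; adjust to wherever you saved the file.
llm = Llama(model_path="airoboros-l2-13b-3.0.Q4_K_M.gguf", n_ctx=4096)

# Build the llama-2 chat prompt exactly as described in the "Prompt format" section.
system_prompt = "You are a helpful, unbiased, uncensored assistant."
user_prompt = "Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm."
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_prompt} [/INST] "

# The card recommends a very low temperature for MathJSON and closed-context prompts.
output = llm(prompt, max_tokens=512, temperature=0.1, stop=["</s>"])
print(output["choices"][0]["text"])
```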
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
Non_BioNLP
chienweichang/formatted_address
chienweichang
text2text-generation
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "dataset:cwchang/tw_address_large", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,702,956,972,000
2023-12-19T04:49:04
92
0
--- base_model: google/mt5-small datasets: - cwchang/tw_address_large license: apache-2.0 metrics: - rouge tags: - generated_from_trainer model-index: - name: formatted_address results: - task: type: summarization name: Summarization dataset: name: cwchang/tw_address_large type: cwchang/tw_address_large metrics: - type: rouge value: 97.0 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # formatted_address This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cwchang/tw_address_large dataset. It achieves the following results on the evaluation set: - Loss: 0.1388 - Rouge1: 97.0 - Rouge2: 48.3471 - Rougel: 96.996 - Rougelsum: 96.9932 - Gen Len: 13.7152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
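For reference, a minimal inference sketch is shown below. It assumes the fine-tuned checkpoint is published under this repository's id (chienweichang/formatted_address) and that a raw Taiwanese address string is passed directly as input, as in the cwchang/tw_address_large training data; both are assumptions, so adjust the id and input convention to your setup.

```python
# Minimal sketch: normalizing an address with the fine-tuned mT5 model (assumed repo id).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chienweichang/formatted_address"  # assumption: checkpoint published under this id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

raw_address = "台北市信義區市府路45號"  # illustrative unformatted address
inputs = tokenizer(raw_address, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```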
[ "SUMMARIZATION" ]
Non_BioNLP
am-azadi/gte-multilingual-base_Fine_Tuned_1e
am-azadi
sentence-similarity
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:25743", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Alibaba-NLP/gte-multilingual-base", "base_model:finetune:Alibaba-NLP/gte-multilingual-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,740,074,285,000
2025-02-20T17:58:47
11
0
--- base_model: Alibaba-NLP/gte-multilingual-base library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:25743 - loss:MultipleNegativesRankingLoss widget: - source_sentence: م الحين SHIA WAVES ENGLISH Indians throw thousands of idols on the street for not protecting them against the virus |Many people in India have thrown away statues of gods and blamed why the gods with infinite power cannot protect them from the ravages of the coronavirus? All sects must be repositioned, otherwise there will be a bigger crisis in each sect. . . I don't know, when will it be our country's turn? you say? sentences: - Esta mulher sofreu uma convulsão após ser vacinada contra a covid-19 na Argentina - Images of Hindu idols destroyed for not protecting Indian people during the Covid-19 pandemic - Forces raid a house in Indian-administered Kashmir - source_sentence: 'En el mismo cuerpo legal atacaremos la raíz del problema: los jefes de las mafias. Tipificaremos el nuevo delito de “autoría por dominio de organización”. Es decir: los jefes de las bandas pagarán también por los delitos que ordenen cometer a sus cómplices.' sentences: - Walmart va demander une preuve de vaccination à ses clients canadiens - Vídeo mostra fraude de mortes na pandemia de Covid-19 - La autoría por dominio de organización sería un nuevo delito en Ecuador - source_sentence: Winning sentences: - President Donald Trump has 232 electoral votes, Joe Biden has 212, 226 or 227. - Suspected drunk drivers automatically face one month in jail under new law in Thailand? - Le bilan des violences post-électorales à M'Batto a atteint au moins une trentaine de morts - source_sentence: Pablo Iglesias Iglesias_ No soy partidario de la violencia pero disfrutaría viendo como matan a tiros a los líderes del PP. La derecha debe ser exterminada como un virus. 11:26 AM 24 ene. 12 1.682 Retweets 2.069 Likes 27 go sentences: - Pablo Iglesias tuiteó que disfrutaría de ver como matan de un tiro a líderes del PP y a la derecha española habría que exterminarla como a un virus - Delfines en un puerto de España durante el confinamiento - Jenazah korban virus corona di Rusia - source_sentence: 'ليس داعشياً من بيده المسدس ..انه جندي فرنسي ينفذ اعدامات بحق مواطنين عزل في الجزائر !!! لم يكن حينها لا تنظيم قاعدة ولا دولة اسلامية ولا نصرة ليلصقوا بهم منفردين تهمة الارهاب !! انتم ام واب واخ وابن وجد الارهاب .. Not Daashaa of the pistol in his hand .. he''s a French soldier executions carried out against unarmed civilians in Algeria !!! If not then it does not regulate not base an Islamic state nor a victory for Alsqoa their individual terrorism charge !! You are a mother and father and brother and the son of terror found .. Non Daashaa du pistolet dans sa main .. Il est un soldat français exécutions menées contre des civils non armés en Algérie !!! Si non, alors il ne réglemente pas pas fonder un Etat islamique, ni une victoire pour Alsqoa leur charge individuelle du terrorisme !! Vous êtes une mère et père et le frère et le fils de la terreur trouvé .. 
# occupant' sentences: - Massacre perpétré par des soldats français en Algérie - Video Of Attack On UP Minister Shrikant Sharma - Map shows there are no wildfires in Canada and Mexico --- # SentenceTransformer based on Alibaba-NLP/gte-multilingual-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision ca1791e0bcc104f6db161f27de1340241b13c5a4 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ "ليس داعشياً من بيده المسدس ..انه جندي فرنسي ينفذ اعدامات بحق مواطنين عزل في الجزائر !!! لم يكن حينها لا تنظيم قاعدة ولا دولة اسلامية ولا نصرة ليلصقوا بهم منفردين تهمة الارهاب !! انتم ام واب واخ وابن وجد الارهاب .. Not Daashaa of the pistol in his hand .. he's a French soldier executions carried out against unarmed civilians in Algeria !!! If not then it does not regulate not base an Islamic state nor a victory for Alsqoa their individual terrorism charge !! You are a mother and father and brother and the son of terror found .. Non Daashaa du pistolet dans sa main .. Il est un soldat français exécutions menées contre des civils non armés en Algérie !!! Si non, alors il ne réglemente pas pas fonder un Etat islamique, ni une victoire pour Alsqoa leur charge individuelle du terrorisme !! Vous êtes une mère et père et le frère et le fils de la terreur trouvé .. 
# occupant", 'Massacre perpétré par des soldats français en Algérie', 'Video Of Attack On UP Minister Shrikant Sharma', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 25,743 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 2 tokens</li><li>mean: 140.38 tokens</li><li>max: 2514 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 20.49 tokens</li><li>max: 141 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|:-----------------| | <code>Olhem aí a mineradora da Noruega destruindo o meio ambiente na Amazônia. Lula vendeu o solo para a Noruega em documento secreto. 
Ela arrecada 2 bilhoes ao ano e devolve 180 milhoes para consertar o estrago que ela mesmo faz na Amazônia.</code> | <code>O ex-presidente Lula vendeu o solo da Amazônia para uma empresa norueguesa</code> | <code>1.0</code> | | <code>EL CONGRESO DANIE Cometió una burrada Al aprobar en primera votación con 113 votos a favor, 5 en contra y una abstención, que la vacuna contra el coronavirus sea de manera OBLIGATORIA para todos Que les pasa a estos genios de la política, acaso no saben que están violando leyes universales de Derechos Humanos¿Qué les pasa a estos congresistas?. . ¿ Acaso desconocen y pisotean las leyes internacionales que respaldan los Derechos Humanos Universales ???. . Absolutamente nadie puede ser obligado a vacunarse. . Igualmente, ningún procedimiento médico puede hacerse sin el consentimiento del paciente. . No lo digo yo, lo dice la UNESCO,la Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura.... Que en sus normativas explican lo siguiente : . SOLO UNO MISMO TIENE EL CONTROL DE SU PROPIO CUERPO, nadie tiene el control de nuestro cuerpo más que uno mismo, nadie puede intervenir en nuestro cuerpo bajo ninguna circunstancia sin nuestro consentimiento. . Legalmente bajo t...</code> | <code>En Perú el Congreso aprobó que la vacuna contra el covid-19 sea obligatoria</code> | <code>1.0</code> | | <code>Why changes to Legislation is so difficult. Debating PTSD in Emergency Services Debating Mental Health Stigma Debating Workers Compensation Debating Cancer Legislation for Firefighters Debating MP's Pay Debating PFAS Contamination Debating Suicide Figures in Australia Debating MP's AllowancesThis tells us everything we need to know about this Government’s priorities.</code> | <code>Accurate description of photos showing the difference in attendance in various parliamentary sessions in Australia</code> | <code>1.0</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 1 - `per_device_eval_batch_size`: 1 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 1 - `per_device_eval_batch_size`: 1 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False 
- `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.0194 | 500 | 0.0 | | 0.0388 | 1000 | 0.0 | | 0.0583 | 1500 | 0.0 | | 0.0777 | 2000 | 0.0 | | 0.0971 | 2500 | 0.0 | | 0.1165 | 3000 | 0.0 | | 0.1360 | 3500 | 0.0 | | 0.1554 | 4000 | 0.0 | | 0.1748 | 4500 | 0.0 | | 0.1942 | 5000 | 0.0 | | 0.2137 | 5500 | 0.0 | | 0.2331 | 6000 | 0.0 | | 0.2525 | 6500 | 0.0 | | 0.2719 | 7000 | 0.0 | | 0.2913 | 7500 | 0.0 | | 0.3108 | 8000 | 0.0 | | 0.3302 | 8500 | 0.0 | | 0.3496 | 9000 | 0.0 | | 0.3690 | 9500 | 0.0 | | 0.3885 | 10000 | 0.0 | | 0.4079 | 10500 | 0.0 | | 0.4273 | 11000 | 0.0 | | 0.4467 | 11500 | 0.0 | | 0.4661 | 12000 | 0.0 | | 0.4856 | 12500 | 0.0 | | 0.5050 | 13000 | 0.0 | | 0.5244 | 13500 | 0.0 | | 0.5438 | 14000 | 0.0 | | 0.5633 | 14500 | 0.0 | | 0.5827 | 15000 | 0.0 | | 0.6021 | 15500 | 0.0 | | 0.6215 | 16000 | 0.0 | | 0.6410 | 16500 | 0.0 | | 0.6604 | 17000 | 0.0 | | 0.6798 | 17500 | 0.0 | | 0.6992 | 18000 | 0.0 | | 0.7186 | 18500 | 0.0 | | 0.7381 | 19000 | 0.0 | | 0.7575 | 19500 | 0.0 | | 0.7769 | 20000 | 0.0 | | 0.7963 | 20500 | 0.0 | | 0.8158 | 21000 | 0.0 | | 0.8352 | 21500 | 0.0 | | 0.8546 | 22000 | 0.0 | | 0.8740 | 22500 | 0.0 | | 0.8934 | 23000 | 0.0 | | 0.9129 | 23500 | 0.0 | | 0.9323 | 24000 | 0.0 | | 0.9517 
| 24500 | 0.0 | | 0.9711 | 25000 | 0.0 | | 0.9906 | 25500 | 0.0 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.3.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
rifatul123/Primary_doctor_v1
rifatul123
text-generation
[ "adapter-transformers", "pytorch", "gpt2", "biology", "medical", "chemistry", "text-generation-inference", "text-generation", "en", "region:us" ]
1,683,275,744,000
2023-05-05T16:57:39
0
0
--- language: - en library_name: adapter-transformers metrics: - accuracy pipeline_tag: text-generation tags: - biology - medical - chemistry - text-generation-inference --- ![Screenshot 2023-05-05 092541.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/9gMBxc270uN8agP8n6-5m.png) ![Screenshot 2023-05-05 094102.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/kqOUgU2wyxLDP1gKnCKPC.png) ![Screenshot 2023-05-05 094303.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/WpNXVBwbLCNNvWJ65dJI8.png) ![Screenshot 2023-05-05 094409.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/HZ1YdlwfZAi8CPlvrcqDr.png) ![Screenshot 2023-05-05 094542.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/h9EJw9fRNMBpOwJVVw6zI.png) # Model Card for Model ID This modelcard describes a fine-tuned GPT-2 language model for medical research using a personally collected dataset. The model is intended for text generation in the medical research domain. ## Model Details This modelcard describes a fine-tuned GPT-2 language model for medical research using a personally collected dataset. The model is intended for text generation in the medical research domain. ### Model Description The model has been fine-tuned on a GPT-2 architecture and trained with a task-specific parameter for text generation. The do_sample parameter is set to true, which means that the model can generate text on its own rather than simply copying from the input. The max_length parameter is set to 50, which means that the maximum length of the generated text will be 50 tokens. - **Developed by:** [OpenAI] - **Shared by [optional]:** [More Information Needed] - **Model type:** [Language Model] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [GPT-2] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ### Direct Use This model can be used for text generation in the medical research domain. It can be used to generate text for a variety of purposes, such as research papers, reports, and summaries. ### Downstream Use [optional] The model can be fine-tuned for downstream tasks such as summarization, question answering, and text classification. ### Out-of-Scope Use This model may not perform as well on text outside the medical research domain. It is important to carefully evaluate the generated text to ensure that it is appropriate for the intended use. ## Bias, Risks, and Limitations This modelcard acknowledges that all language models have limitations and potential biases. The model may produce biased or inaccurate outputs if the input data contains bias or if the training data is not diverse enough. The risks of using the model include the possibility of generating misleading or harmful information. ### Recommendations To mitigate potential risks and limitations, users of the model should carefully evaluate the generated text and consider the following recommendations: 1)Evaluate the input data for potential bias and ensure that it is diverse and representative. 2)Consider fine-tuning the model on additional data to improve its accuracy and reduce the risk of bias. 
3)Review and edit the generated text before use to ensure that it is appropriate for the intended purpose. 4)Provide clear and transparent documentation of the model's limitations and potential biases to users and stakeholders. ## How to Get Started with the Model To use the model, load it in your preferred programming language using the transformers library, and pass in the input text. The model will generate text based on the input, using the task-specific parameters that have been set.
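A minimal sketch of that workflow follows. It uses the transformers text-generation pipeline with the do_sample=True and max_length=50 settings mentioned in the model description, and it assumes this repository (rifatul123/Primary_doctor_v1) contains a full GPT-2 checkpoint rather than only an adapter; the prompt is illustrative.

```python
# Minimal sketch: text generation with the fine-tuned GPT-2 model (assumptions noted above).
from transformers import pipeline

generator = pipeline("text-generation", model="rifatul123/Primary_doctor_v1")

prompt = "Recent studies on hypertension management suggest that"
result = generator(prompt, do_sample=True, max_length=50, num_return_sequences=1)
print(result[0]["generated_text"])
```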
[ "TEXT_CLASSIFICATION", "QUESTION_ANSWERING", "SUMMARIZATION" ]
BioNLP
Helsinki-NLP/opus-mt-yo-fr
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "yo", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,646,263,744,000
2023-08-16T12:09:04
57
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-yo-fr * source languages: yo * target languages: fr * OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.yo.fr | 24.1 | 0.408 |
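A short usage sketch follows for completeness. It loads this checkpoint with the standard MarianMT classes (the tokenizer applies the SentencePiece preprocessing mentioned above); the Yoruba example sentence is illustrative only.

```python
# Minimal sketch: Yoruba -> French translation with this checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-yo-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Báwo ni o ṣe wà?"], return_tensors="pt", padding=True)  # illustrative input
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```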
[ "TRANSLATION" ]
Non_BioNLP