Dataset Viewer
Auto-converted to Parquet

| Column | Type | Range (min – max) |
|---|---|---|
| id | string | length 11 – 95 |
| author | string | length 3 – 36 |
| task_category | string (16 classes) | — |
| tags | sequence | length 1 – 4.05k |
| created_time | int64 | 1.65k – 1.74k |
| last_modified | int64 | 1.62k – 1.74k |
| downloads | int64 | 0 – 15.6M |
| likes | int64 | 0 – 4.86k |
| README | string | length 246 – 1.01M |
| matched_task | sequence | length 1 – 8 |
| matched_bigbio_names | sequence | length 1 – 8 |
| is_bionlp | string (3 classes) | — |
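The rows below follow this schema. A minimal sketch for loading the auto-converted Parquet split with the 🤗 `datasets` library (the repository id is a placeholder, since this dump does not name the dataset):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual dataset name from the Hub.
ds = load_dataset("username/model-card-dataset", split="train")

print(ds.features)             # id, author, task_category, tags, README, ...
print(ds[0]["task_category"])  # one of the 16 task classes
```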
id: Goodmotion/spam-mail-classifier
author: Goodmotion
task_category: text-classification
tags: [ "transformers", "safetensors", "text-classification", "spam-detection", "license:apache-2.0", "endpoints_compatible", "region:us" ]
created_time: 1,733
last_modified: 1,733
downloads: 87
likes: 2
README:
---
license: apache-2.0
tags:
- transformers
- text-classification
- spam-detection
---

# SPAM Mail Classifier

This model is fine-tuned from `microsoft/Multilingual-MiniLM-L12-H384` to classify email subjects as SPAM or NOSPAM.

## Model Details

- **Base model**: `microsoft/Multilingual-MiniLM-L12-H384`
- **Fine-tuned for**: Text classification
- **Number of classes**: 2 (SPAM, NOSPAM)
- **Languages**: Multilingual

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Goodmotion/spam-mail-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Félicitations ! Vous avez gagné un iPhone."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```

### Example with a list of texts

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Goodmotion/spam-mail-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

texts = [
    'Join us for a webinar on AI innovations',
    'Urgent: Verify your account immediately.',
    'Meeting rescheduled to 3 PM',
    'Happy Birthday!',
    'Limited time offer: Act now!',
    'Join us for a webinar on AI innovations',
    'Claim your free prize now!',
    'You have unclaimed rewards waiting!',
    'Weekly newsletter from Tech World',
    'Update on the project status',
    'Lunch tomorrow at 12:30?',
    'Get rich quick with this amazing opportunity!',
    'Invoice for your recent purchase',
    'Don\'t forget: Gym session at 6 AM',
    'Join us for a webinar on AI innovations',
    'bonjour comment allez vous ?',
    'Documents suite à notre rendez-vous',
    'Valentin Dupond mentioned you in a comment',
    'Bolt x Supabase = 🤯',
    'Modification site web de la société',
    'Image de mise en avant sur les articles',
    'Bring new visitors to your site',
    'Le Cloud Éthique sans bullshit',
    'Remix Newsletter #25: React Router v7',
    'Votre essai auprès de X va bientôt prendre fin',
    'Introducing a Google Docs integration, styles and more in Claude.ai',
    'Carte de crédit sur le point d’expirer sur Cloudflare'
]

inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**inputs)

# Convert the logits to probabilities with softmax
logits = outputs.logits
probabilities = torch.softmax(logits, dim=1)

# Map the predicted class index of each text to a label
labels = ["NOSPAM", "SPAM"]
results = [
    {"text": text, "label": labels[torch.argmax(prob).item()], "confidence": prob.max().item()}
    for text, prob in zip(texts, probabilities)
]

# Print the results
for result in results:
    print(f"Text: {result['text']}")
    print(f"Result: {result['label']} (confidence: {result['confidence']:.2%})\n")
```
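For quick checks, the same checkpoint can also be called through the `pipeline` API — a minimal sketch (the returned label names come from the model's config, so they may be generic `LABEL_0`/`LABEL_1` rather than NOSPAM/SPAM; verify on first use):

```python
from transformers import pipeline

# Loads tokenizer and model in one step
classifier = pipeline("text-classification", model="Goodmotion/spam-mail-classifier")
print(classifier("Limited time offer: Act now!"))
# e.g. [{'label': 'SPAM', 'score': 0.99}] if id2label is set in the config
```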
[ "TEXT_CLASSIFICATION" ]
[ "ESSAI" ]
Non_BioNLP
id: knowledgator/gliner-poly-small-v1.0
author: knowledgator
task_category: token-classification
tags: [ "gliner", "pytorch", "token-classification", "multilingual", "dataset:urchade/pile-mistral-v0.1", "dataset:numind/NuNER", "dataset:knowledgator/GLINER-multi-task-synthetic-data", "license:apache-2.0", "region:us" ]
created_time: 1,724
last_modified: 1,724
downloads: 32
likes: 14
README:
---
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
---

# About

GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs), which, despite their flexibility, are costly and large for resource-constrained scenarios.

This particular version utilizes a bi-encoder architecture with post-fusion, where the textual encoder is [DeBERTa v3 small](https://huggingface.co/microsoft/deberta-v3-small) and the entity label encoder is the sentence transformer [BGE-small-en](https://huggingface.co/BAAI/bge-small-en-v1.5). Such an architecture brings several advantages over the uni-encoder GLiNER:

* An unlimited number of entity types can be recognized at once;
* Faster inference when entity embeddings are precomputed;
* Better generalization to unseen entities.

The post-fusion strategy also brings advantages over a classical bi-encoder by enabling better inter-label understanding.

### Installation & Usage

Install or update the gliner package:

```bash
pip install gliner -U
```

Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("knowledgator/gliner-poly-small-v1.0")

text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""

labels = ["person", "award", "date", "competitions", "teams"]

entities = model.predict_entities(text, labels, threshold=0.25)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
```

```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```

If you have a large number of entities and want to pre-embed them, please refer to the following code snippet:

```python
labels = ["your entities"]
texts = ["your texts"]

# Pre-compute the label embeddings once, then reuse them across batches
entity_embeddings = model.encode_labels(labels, batch_size=8)

outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
```

### Benchmarks

Below you can see the table with benchmarking results on various named entity recognition datasets:

| Dataset | Score |
|---------|-------|
| ACE 2004 | 25.4% |
| ACE 2005 | 27.2% |
| AnatEM | 17.7% |
| Broad Tweet Corpus | 70.2% |
| CoNLL 2003 | 67.8% |
| FabNER | 22.9% |
| FindVehicle | 40.2% |
| GENIA_NER | 47.7% |
| HarveyNER | 15.5% |
| MultiNERD | 64.5% |
| Ontonotes | 28.7% |
| PolyglotNER | 47.5% |
| TweetNER7 | 39.3% |
| WikiANN en | 56.7% |
| WikiNeural | 80.0% |
| bc2gm | 56.2% |
| bc4chemd | 48.7% |
| bc5cdr | 60.5% |
| ncbi | 53.5% |
| **Average** | **45.8%** |
| | |
| CrossNER_AI | 48.9% |
| CrossNER_literature | 64.0% |
| CrossNER_music | 68.7% |
| CrossNER_politics | 69.0% |
| CrossNER_science | 62.7% |
| mit-movie | 40.3% |
| mit-restaurant | 36.2% |
| **Average (zero-shot benchmark)** | **55.7%** |

### Join Our Discord

Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
[ "NAMED_ENTITY_RECOGNITION" ]
[ "ANATEM", "BC5CDR" ]
Non_BioNLP
id: QuantFactory/meditron-7b-GGUF
author: QuantFactory
task_category: null
tags: [ "gguf", "en", "dataset:epfl-llm/guidelines", "arxiv:2311.16079", "base_model:meta-llama/Llama-2-7b", "base_model:quantized:meta-llama/Llama-2-7b", "license:llama2", "endpoints_compatible", "region:us" ]
created_time: 1,727
last_modified: 1,727
downloads: 206
likes: 1
README:
---
base_model: meta-llama/Llama-2-7b
datasets:
- epfl-llm/guidelines
language:
- en
license: llama2
metrics:
- accuracy
- perplexity
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/meditron-7b-GGUF

This is a quantized version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) created using llama.cpp.

# Original Model Card

<img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo">

# Model Card for Meditron-7B-v1.0

Meditron is a suite of open-source medical Large Language Models (LLMs). Meditron-7B is a 7 billion parameter model adapted to the medical domain from Llama-2-7B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). Meditron-7B, finetuned on relevant training data, outperforms Llama-2-7B and PMC-Llama on multiple medical reasoning tasks.

<details open>
<summary><strong>Advisory Notice</strong></summary>
<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</blockquote>
</details>

## Model Details

- **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Code License:** [APACHE 2.0 LICENSE](LICENSE)
- **Continue-pretrained from model:** [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b)
- **Context length:** 2K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
- **Knowledge Cutoff:** August 2023

### Model Sources

- **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron)
- **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)*

## Uses

Meditron-7B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and to improve access to an LLM for healthcare use. Potential use cases may include but are not limited to:

- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query

### Direct Use

It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities.
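For example, the GGUF files in this repository can be run locally with `llama-cpp-python` — a minimal sketch (the quant filename below is an assumption; pick an actual file from the repository's file list):

```python
from llama_cpp import Llama

# Assumed quant filename -- check the repo's file list for the exact name.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/meditron-7b-GGUF",
    filename="meditron-7b.Q4_K_M.gguf",
    n_ctx=2048,  # Meditron-7B was trained with a 2K-token context
)

out = llm("Common symptoms of type 2 diabetes include", max_tokens=128)
print(out["choices"][0]["text"])
```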
It should not be used directly for production or work that may impact people.

### Downstream Use

Meditron-70B and Meditron-7B are both foundation models without finetuning or instruction-tuning. They can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications. There are two ways we have used this model for downstream question-answering tasks:

1. We apply in-context learning with k demonstrations (3 or 5 in our paper) added to the prompt.
2. We finetuned the models for downstream question-answering tasks using specific training sets.

We encourage and look forward to the adaptation of the base model for more diverse applications.

If you want a more interactive way to prompt the model, we recommend using a high-throughput and memory-efficient inference engine with a UI that supports chat and text generation. You can check out our deployment [guide](https://github.com/epfLLM/meditron/blob/main/deployment/README.md), where we used [FastChat](https://github.com/lm-sys/FastChat) with [vLLM](https://github.com/vllm-project/vllm). We collected generations for our qualitative analysis through an interactive UI platform, [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT). Here is the prompt format we used as an example:

<img width=70% src="prompt_example.png" alt="qualitative-analysis-prompt" title="Qualitative Analysis Prompt">

### Out-of-Scope Use

We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.

## Truthfulness, Helpfulness, Risk, and Bias

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

We did an initial assessment of the Meditron models' **truthfulness** against baseline models and consumer-level medical models. We use TruthfulQA (multiple choice) as the main evaluation benchmark. We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science. For 7B models, we perform one-shot evaluations for consistent answer generation. For 70B models, the evaluations are under the zero-shot setting. Below, we report the detailed truthfulness performance of each category.

| Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
|---|-----|-----|-----|-----|-----|-----|
| Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |

For a more detailed performance analysis, please see our paper.

Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model. Please recognize that our evaluation of Meditron-7B's helpfulness, risk, and bias is highly limited. Thus, as we noted in the safety notice, we strongly advise against any deployment in medical applications without a further alignment process and rigorous evaluation!

### Recommendations

**IMPORTANT!** Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations. Understanding these limitations is especially important in a domain like medicine. Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine.

## Training Details

### Training Data

Meditron's domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora:

- [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations.
- **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers.
- **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers.
- **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)

<img width=75% src="gap-replay.png" alt="Alt text" title="Meditron-logo">

#### Data Preprocessing

Please see the detailed preprocessing procedure in our paper.

### Training Procedure

We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency. Hardware consists of 1 node of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM.

Our three-way parallelism scheme uses:

- Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2,
- Pipeline Parallelism (PP -- different GPUs process different layers) of 4,
- Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 1.

#### Training Hyperparameters

| | |
| --- | ------ |
| bf16 | true |
| lr | 3e-4 |
| eps | 1e-5 |
| betas | \[0.9, 0.95\] |
| clip_grad | 1 |
| weight decay | 0.1 |
| DP size | 16 |
| TP size | 4 |
| PP size | 1 |
| seq length | 2048 |
| lr scheduler | cosine |
| min lr | 1e-6 |
| warmup iteration | 2000 |
| micro batch size | 10 |
| global batch size | 1600 |

#### Sizes

The model was trained in September 2023. The model architecture is exactly Llama 2, meaning:

| | |
| --- | ------ |
| Model size | 7B |
| Hidden dimension | 4096 |
| Num. attention heads | 32 |
| Num. layers | 32 |

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data & Metrics

#### Testing Data

- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)

#### Metrics

- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.

### Results

We finetune meditron-7b, llama-2-7b, pmc-llama-7b on each benchmark's (pubmedqa, medmcqa, medqa) training data individually. We report the finetuned models' performance with top-token selection as the inference mode. For MMLU-Medical, models finetuned on MedMCQA are used for inference. For MedQA-4-Option, models finetuned on MedQA are used for inference. For a more detailed performance analysis, please see our paper.

| Dataset | meditron-7b | llama-2-7b | pmc-llama-7b | Zephyr-7B-beta* | Mistral-7B-instruct* |
|------|-----|-----|-----|-----|-----|
| MMLU-Medical | 54.2 | 53.7 | 56.4 | 63.3 | 60.0 |
| PubMedQA | 74.4 | 61.8 | 59.2 | 46.0 | 17.8 |
| MedMCQA | 59.2 | 54.4 | 57.6 | 43.0 | 40.2 |
| MedQA | 47.9 | 44.0 | 42.4 | 42.8 | 32.4 |
| MedQA-4-Option | 52.0 | 49.6 | 49.2 | 48.5 | 41.1 |
| Avg | 57.5 | 52.7 | 53.0 | 48.7 | 38.3 |

**Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- **Hardware Type:** 8 x NVIDIA A100 (80GB) SXM
- **Total GPU hours:** 588.8
- **Hardware Provider:** EPFL Research Computing Platform
- **Compute Region:** Switzerland
- **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 73.6 hours of 8 A100s means 588.8 GPU-hours at a TDP of 400W. Assuming a Power Usage Effectiveness of 1.8, total emissions are estimated to be: (400 W / 1000 (W/kW) per GPU * 0.016 kgCO2/kWh * 73.6 h * 8 GPUs) * 1.8 PUE = 6.8 kgCO2.

## Citation

**BibTeX:** If you use Meditron or its training data, please cite our work:

```
@misc{chen2023meditron70b,
  title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models},
  author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
  year={2023},
  eprint={2311.16079},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@software{epfmedtrn,
  author = {Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
  title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models},
  month = nov,
  year = 2023,
  url = {https://github.com/epfLLM/meditron}
}
```
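A quick check of the emissions arithmetic above, with the values copied from the card:

```python
# Verify the card's carbon estimate: energy (kWh) * carbon intensity * PUE
gpu_power_kw = 0.4        # 400 W TDP per GPU
hours, gpus = 73.6, 8
carbon_intensity = 0.016  # kgCO2/kWh (Switzerland)
pue = 1.8

kwh = gpu_power_kw * hours * gpus              # 235.52 kWh
print(round(kwh * carbon_intensity * pue, 1))  # 6.8 kgCO2
```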
[ "QUESTION_ANSWERING" ]
[ "MEDQA", "PUBMEDQA" ]
BioNLP
id: m42-health/Llama3-Med42-8B
author: m42-health
task_category: text-generation
tags: [ "transformers", "safetensors", "llama", "text-generation", "m42", "health", "healthcare", "clinical-llm", "conversational", "en", "arxiv:2408.06142", "license:llama3", "autotrain_compatible", "text-generation-inference", "region:us" ]
created_time: 1,719
last_modified: 1,724
downloads: 1,966
likes: 62
README:
---
language:
- en
license: llama3
license_name: llama3
pipeline_tag: text-generation
tags:
- m42
- health
- healthcare
- clinical-llm
inference: false
---

# **Med42-v2 - A Suite of Clinically-aligned Large Language Models**

Med42-v2 is a suite of open-access clinical large language models (LLMs) instruct- and preference-tuned by M42 to expand access to medical knowledge. Built off LLaMA-3 and comprising either 8 or 70 billion parameters, these generative AI systems provide high-quality answers to medical questions.

## Key performance metrics:

- Med42-v2-70B outperforms GPT-4.0 in most of the MCQA tasks.
- Med42-v2-70B achieves a MedQA zero-shot performance of 79.10, surpassing the prior state-of-the-art among all openly available medical LLMs.
- Med42-v2-70B sits at the top of the Clinical Elo Rating Leaderboard.

| Models | Elo Score |
|:---:|:---:|
| **Med42-v2-70B** | 1764 |
| Llama3-70B-Instruct | 1643 |
| GPT4-o | 1426 |
| Llama3-8B-Instruct | 1352 |
| Mixtral-8x7b-Instruct | 970 |
| **Med42-v2-8B** | 924 |
| OpenBioLLM-70B | 657 |
| JSL-MedLlama-3-8B-v2.0 | 447 |

## Limitations & Safe Use

- The Med42-v2 suite of models is not ready for real clinical use. Extensive human evaluation, which is required to ensure safety, is ongoing.
- Potential for generating incorrect or harmful information.
- Risk of perpetuating biases in training data.

Use this suite of models responsibly! Do not rely on them for medical usage without rigorous safety testing.

## Model Details

*Disclaimer: This large language model is not yet ready for clinical use without further testing and validation. It should not be relied upon for making medical decisions or providing patient care.*

Starting from the Llama3 models, Med42-v2 was instruction-tuned using a dataset of ~1B tokens compiled from different open-access and high-quality sources, including medical flashcards, exam questions, and open-domain dialogues.

**Model Developers:** M42 Health AI Team

**Finetuned from model:** Llama3 - 8B & 70B Instruct

**Context length:** 8k tokens

**Input:** Text only data

**Output:** Model generates text only

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.

**License:** Llama 3 Community License Agreement

**Research Paper:** [Med42-v2: A Suite of Clinical LLMs](https://huggingface.co/papers/2408.06142)

## Intended Use

The Med42-v2 suite of models is being made available for further testing and assessment as AI assistants to enhance clinical decision-making and access to LLMs for healthcare use. Potential use cases include:

- Medical question answering
- Patient record summarization
- Aiding medical diagnosis
- General health Q&A

**Run the model**

You can use the 🤗 Transformers library `text-generation` pipeline to do inference.

```python
import transformers
import torch

model_name_or_path = "m42-health/Llama3-Med42-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful, respectful and honest medical assistant. "
            "You are a second version of Med42 developed by the AI team at M42, UAE. "
            "Always answer as helpfully as possible, while being safe. "
            "Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. "
            "Please ensure that your responses are socially unbiased and positive in nature. "
            "If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. "
            "If you don't know the answer to a question, please don't share false information."
        ),
    },
    {"role": "user", "content": "What are the symptoms of diabetes?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=False
)

stop_tokens = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=stop_tokens,
    do_sample=True,
    temperature=0.4,
    top_k=150,
    top_p=0.75,
)

print(outputs[0]["generated_text"][len(prompt):])
```

## Hardware and Software

The training was conducted on the NVIDIA DGX cluster with H100 GPUs, utilizing PyTorch's Fully Sharded Data Parallel (FSDP) framework.

## Evaluation Results

### Open-ended question generation

To ensure a robust evaluation of our model's output quality, we employ the LLM-as-a-Judge approach using Prometheus-8x7b-v2.0. Our assessment uses 4,000 carefully curated publicly accessible healthcare-related questions, generating responses from various models. We then use Prometheus to conduct pairwise comparisons of the answers. Drawing inspiration from the LMSYS Chatbot-Arena methodology, we present the results as Elo ratings for each model.

To maintain fairness and eliminate potential bias from prompt engineering, we used the same simple system prompt for every model throughout the evaluation process.

Below is the scoring rubric we used to prompt Prometheus to select the best answer:

```
### Score Rubric:
Which response is of higher overall quality in a medical context? Consider:
* Relevance: Does it directly address the question?
* Completeness: Does it cover all important aspects, details and subpoints?
* Safety: Does it avoid unsafe practices and address potential risks?
* Ethics: Does it maintain confidentiality and avoid biases?
* Clarity: Is it professional, clear and easy to understand?
```

#### Elo Ratings

| Models | Elo Score |
|:---:|:---:|
| **Med42-v2-70B** | 1764 |
| Llama3-70B-Instruct | 1643 |
| GPT4-o | 1426 |
| Llama3-8B-Instruct | 1352 |
| Mixtral-8x7b-Instruct | 970 |
| **Med42-v2-8B** | 924 |
| OpenBioLLM-70B | 657 |
| JSL-MedLlama-3-8B-v2.0 | 447 |

#### Win-rate

![plot](./pairwise_model_comparison.svg)

### MCQA Evaluation

Med42-v2 improves performance on every clinical benchmark compared to our previous version, including MedQA, MedMCQA, USMLE, MMLU clinical topics and the MMLU Pro clinical subset. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (except where otherwise stated). We integrated chat templates into the harness and computed the likelihood for the full answer instead of only the tokens "a.", "b.", "c." or "d." (a minimal sketch of this scoring scheme appears after the table below).

| Model | MMLU Pro | MMLU | MedMCQA | MedQA | USMLE |
|---:|:---:|:---:|:---:|:---:|:---:|
| **Med42v2-70B** | 64.36 | 87.12 | 73.20 | 79.10 | 83.80 |
| **Med42v2-8B** | 54.30 | 75.76 | 61.34 | 62.84 | 67.04 |
| OpenBioLLM-70B | 64.24 | 90.40 | 73.18 | 76.90 | 79.01 |
| GPT-4.0<sup>&dagger;</sup> | - | 87.00 | 69.50 | 78.90 | 84.05 |
| MedGemini* | - | - | - | 84.00 | - |
| Med-PaLM-2 (5-shot)* | - | 87.77 | 71.30 | 79.70 | - |
| Med42 | - | 76.72 | 60.90 | 61.50 | 71.85 |
| ClinicalCamel-70B | - | 69.75 | 47.00 | 53.40 | 54.30 |
| GPT-3.5<sup>&dagger;</sup> | - | 66.63 | 50.10 | 50.80 | 53.00 |
| Llama3-8B-Instruct | 48.24 | 72.89 | 59.65 | 61.64 | 60.38 |
| Llama3-70B-Instruct | 64.24 | 85.99 | 72.03 | 78.88 | 83.57 |

*For MedGemini, results are reported for MedQA without self-training and without search. We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42).*

<sup>&dagger;</sup> *Results as reported in the paper [Capabilities of GPT-4 on Medical Challenge Problems](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf)*.
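To illustrate the full-answer likelihood scoring described above, here is a minimal, hedged sketch (not the harness's actual implementation; the question and options are invented, and BPE boundary effects between prompt and answer tokenization are ignored):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m42-health/Llama3-Med42-8B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities of the answer tokens given the prompt."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full = tok(prompt + " " + answer, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full).logits
    # Logits at position i predict token i+1, so shift targets by one.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full[0, prompt_len:]
    positions = torch.arange(prompt_len - 1, full.shape[1] - 1, device=full.device)
    return logprobs[positions, targets].sum().item()

# Invented example question and options, for illustration only.
question = "Which of the following is a classic symptom of diabetes mellitus?"
options = {"a": "Polyuria", "b": "Bradycardia", "c": "Photophobia", "d": "Tinnitus"}
best = max(options, key=lambda k: answer_logprob(question, options[k]))
print(best, options[best])
```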
## Accessing Med42 and Reporting Issues

Please report any software "bug" or other problems through one of the following means:

- Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42)
- Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/fPY4Ksecgf](https://forms.office.com/r/fPY4Ksecgf)
- M42's privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/)
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]>

## Acknowledgements

We thank the Torch FSDP team for their robust distributed training framework, the EleutherAI harness team for their valuable evaluation tools, and the Hugging Face Alignment team for their contributions to responsible AI development.

## Citation

```
@misc{med42v2,
  Author = {Cl{\'e}ment Christophe and Praveen K Kanithi and Tathagata Raha and Shadab Khan and Marco AF Pimentel},
  Title = {Med42-v2: A Suite of Clinical LLMs},
  Year = {2024},
  Eprint = {arXiv:2408.06142},
  url = {https://arxiv.org/abs/2408.06142},
}
```
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
[ "MEDQA" ]
BioNLP
id: seongil-dn/bge-m3-756
author: seongil-dn
task_category: sentence-similarity
tags: [ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1138596", "loss:CachedGISTEmbedLoss", "arxiv:1908.10084", "base_model:seongil-dn/unsupervised_20m_3800", "base_model:finetune:seongil-dn/unsupervised_20m_3800", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
created_time: 1,741
last_modified: 1,741
downloads: 12
likes: 0
README:
--- base_model: seongil-dn/unsupervised_20m_3800 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1138596 - loss:CachedGISTEmbedLoss widget: - source_sentence: How many people were reported to have died in the Great Fire of London in 1666? sentences: - City of London 1666. Both of these fires were referred to as "the" Great Fire. After the fire of 1666, a number of plans were drawn up to remodel the City and its street pattern into a renaissance-style city with planned urban blocks, squares and boulevards. These plans were almost entirely not taken up, and the medieval street pattern re-emerged almost intact. By the late 16th century, London increasingly became a major centre for banking, international trade and commerce. The Royal Exchange was founded in 1565 by Sir Thomas Gresham as a centre of commerce for London's merchants, and gained Royal patronage in - Great Atlanta fire of 1917 Great Atlanta fire of 1917 The Great Atlanta Fire of 1917 began just after noon on 21 May 1917 in the Old Fourth Ward of Atlanta, Georgia. It is unclear just how the fire started, but it was fueled by hot temperatures and strong winds which propelled the fire. The fire, which burned for nearly 10 hours, destroyed and 1,900 structures displacing over 10,000 people. Damages were estimated at $5 million, ($ million when adjusted for inflation). It was a clear, warm and sunny day with a brisk breeze from the south. This was not the only fire of the - Great Plague of London they had ever been seen ...". Plague cases continued to occur sporadically at a modest rate until the summer of 1666. On the second and third of September that year, the Great Fire of London destroyed much of the City of London, and some people believed that the fire put an end to the epidemic. However, it is now thought that the plague had largely subsided before the fire took place. In fact, most of the later cases of plague were found in the suburbs, and it was the City of London itself that was destroyed by the Fire. According - Monument to the Great Fire of London Monument to the Great Fire of London The Monument to the Great Fire of London, more commonly known simply as the Monument, is a Doric column in London, United Kingdom, situated near the northern end of London Bridge. Commemorating the Great Fire of London, it stands at the junction of Monument Street and Fish Street Hill, in height and 202 feet west of the spot in Pudding Lane where the Great Fire started on 2 September 1666. Constructed between 1671 and 1677, it was built on the site of St. Margaret's, Fish Street, the first church to be destroyed by - 'How to Have Sex in an Epidemic New York City government and organizations within the LGBT community. The Gay Men''s Health Crisis offered to buy all 5,000 pamphlets and promote them, with the condition that any mentions of the multifactorial model be removed from the writing. The authors refused. Berkowitz recounts in an interview it being "infuriating" that in 1985, the city still hadn''t adopted any standard safe sex education. The advent of safe sex in urban gay male populations came too late for many people: by 1983, more than 1,476 people had died from AIDS and David France estimated that as much as half of all' - 'Monument to the Great Fire of London six years to complete the 202 ft column. 
It was two more years before the inscription (which had been left to Wren — or to Wren''s choice — to decide upon) was set in place. "Commemorating — with a brazen disregard for the truth — the fact that ''London rises again...three short years complete that which was considered the work of ages.''" Hooke''s surviving drawings show that several versions of the monument were submitted for consideration: a plain obelisk, a column garnished with tongues of fire, and the fluted Doric column that was eventually chosen. The real contention came with' - source_sentence: '"The Claude Francois song ""Comme d''habitude"" (translation ""as usual"") was a hit in English for Frank Sinatra under what title?"' sentences: - Young at Heart (Frank Sinatra song) young, Dick Van Dyke recorded a duet with his wife, Arlene, at Capital Records Studio in Los Angeles, filmed for the HBO Special on aging "If I'm not in the Obituary, I'll have Breakfast" starring Carl Reiner, and featuring other young at heart +90 treasures, Mel Brooks, Norman Lear, Stan Lee & Betty White among others. Van Dyke was recorded using Frank Sinatra's microphone. Young at Heart (Frank Sinatra song) "Young at Heart" is a pop standard, a ballad with music by Johnny Richards and lyrics by Carolyn Leigh. The song was written and published in 1953, with Leigh contributing - 'Comme d''habitude a relationship that is falling out of love, while the English language version is set at the end of a lifetime, approaching death, and looking back without regret – expressing feelings that are more related to Piaf''s song "Non, je ne regrette rien". Many artists sang "Comme d''Habitude" in French after Claude François''s success (and international success through ''"My Way"), notably: David Bowie has said that in 1968 – the year before Paul Anka acquired the French song – his manager, Kenneth Pitt, asked him to write English lyrics for "Comme d''habitude" but that his version, titled "Even a Fool' - Frank Sinatra Me" with Billy May, designed as a musical world tour. It reached the top spot on the Billboard album chart in its second week, remaining at the top for five weeks, and was nominated for the Grammy Award for Album of the Year at the inaugural Grammy Awards. The title song, "Come Fly With Me", written especially for him, would become one of his best known standards. On May 29 he recorded seven songs in a single session, more than double the usual yield of a recording session, and an eighth was planned, "Lush Life", but Sinatra found it too - Frank Sinatra Original Song. Sinatra released "Softly, as I Leave You", and collaborated with Bing Crosby and Fred Waring on "America, I Hear You Singing", a collection of patriotic songs recorded as a tribute to the assassinated President John F. Kennedy. Sinatra increasingly became involved in charitable pursuits in this period. In 1961 and 1962 he went to Mexico, with the sole purpose of putting on performances for Mexican charities, and in July 1964 he was present for the dedication of the Frank Sinatra International Youth Center for Arab and Jewish children in Nazareth. Sinatra's phenomenal success in 1965, coinciding with his - Comme ci comme ça (Basim song) to the charm of it all. Working both Danish and Moroccan Arabic, Basim sings about a girl he is ready to commit to. It doesn’t mater what she wants to do — it’s comme ci comme ça — and he just wants her." 
An official music video to accompany the release of "Comme ci comme ça" was first released onto YouTube on 20 September 2017 at a total length of three minutes and twelve seconds. Comme ci comme ça (Basim song) "Comme ci comme ça" is a song performed by Danish pop singer and songwriter Basim, featuring vocals from Gilli. - Personal life of Frank Sinatra A third child, Christina Sinatra, known as "Tina", was born on June 20, 1948. Nancy Barbato Sinatra and Frank Sinatra announced their separation on Valentine's Day, February 14, 1950, with Frank's additional extra-marital affair with Ava Gardner compounding his transgressions and becoming public knowledge once again. After originally just seeking a legal separation, Frank and Nancy Sinatra decided some months later to file for divorce, and this divorce became legally final on October 29, 1951. Frank Sinatra's affair and relationship with Gardner had become more and more serious, and she later became his second wife. What was perhaps less widely - source_sentence: What was the name of the first Indiana Jones movie? sentences: - Indiana Jones and the Temple of Doom point. Old-time, 15-part movie serials didn't have shape. They just went on and on and on, which is what "Temple of Doom" does with humor and technical invention." Neal Gabler commented that "I think in some ways, "Indiana Jones and the Temple of Doom" was better than "Raiders of the Lost Ark". In some ways it was less. In sum total, I'd have to say I enjoyed it more. That doesn't mean it's better necessarily, but I got more enjoyment out of it." Colin Covert of the "Star Tribune" called the film "sillier, darkly violent and a bit dumbed down, - Indiana Jones and the Temple of Doom (1985 video game) Theme music plays in the background which is the best part of the game. Most of the sound effects are not sharp and not enough of them exist. "Indiana Jones and the Temple of Doom" is a bad game all the way around. It looks bad, has bad controls, and is way too short." Indiana Jones and the Temple of Doom (1985 video game) Indiana Jones and The Temple of Doom is a 1985 action arcade game developed and published by Atari Games, based on the 1984 film of the same name, the second film in the "Indiana Jones" franchise. - Indiana Jones and the Spear of Destiny Indiana Jones and the Spear of Destiny Indiana Jones and The Spear of Destiny is a four-issue comic book mini-series published by Dark Horse Comics from April to July 1995. It was their seventh series about the adult Indiana Jones. Indiana Jones reached for the Holy Grail, perched in a crack in the Temple of the Sun. Hanging onto him, his father, Professor Henry Jones urged him to let it go, and Indy turned back and let his father help him up. As the Joneses ride out into the Canyon of the Crescent Moon with Marcus Brody and Sallah, they - Lego Indiana Jones sets" The line was discontinued in 2010, but since Lucas plans to make a fifth installment to the franchise, the sets may be re-released along with new sets of the possible fifth Indiana Jones film. Due to the fact Disney bought Lucasfilm and will be making a new Indiana Jones movie, chances of new sets are high. The Indiana Jones sets proved to be one of the most popular Lego themes, and by the end of 2008 were credited, along with Lego Star Wars, of boosting the Lego Group's profits within a stagnant toy market. The product line was said - Indiana Jones and the Staff of Kings point-and-click adventure "Indiana Jones and the Fate of Atlantis". 
GameSpot criticized its "terribly laid-out checkpoints", "out-of-date" visuals, and "atrocious, annoying motion controls". Indiana Jones and the Staff of Kings The game was initially developed for the higher-end PlayStation 3 and Xbox 360 systems, before switching to the aforementioned lower-end platforms. As a result, both systems never saw a proper "Indiana Jones" video game being released besides the "" duology. The plot centers around Indy's search for the Staff of Moses. The Wii version of the game includes an exclusive co-op story mode (with Indy and Henry Jones Sr.) and unlockable - 'Indiana Jones and the Last Crusade: The Graphic Adventure Indiana Jones and the Last Crusade: The Graphic Adventure Indiana Jones and the Last Crusade: The Graphic Adventure is a graphic adventure game, released in 1989 (to coincide with the release of the film of the same name), published by Lucasfilm Games (now LucasArts). It was the third game to use the SCUMM engine. "Last Crusade" was one of the most innovative of the LucasArts adventures. It expanded on LucasArts'' traditional adventure game structure by including a flexible point system—the IQ score, or "Indy Quotient"—and by allowing the game to be completed in several different ways. The point system was' - source_sentence: '"Who was the Anglo-Irish scientist who, in the 17th century, discovered that ""the volume of a given mass of gas at a given temperature is inversely proportional to its pressure""?"' sentences: - 'Gay-Lussac''s law Gay-Lussac''s law Gay-Lussac''s law can refer to several discoveries made by French chemist Joseph Louis Gay-Lussac (1778–1850) and other scientists in the late 18th and early 19th centuries pertaining to thermal expansion of gases and the relationship between temperature, volume, and pressure. It states that the pressure of a given mass of gas varies directly with the absolute temperature of the gas, when the volume is kept constant. Mathematically, it can be written as: P/T=constant, Gay-Lussac is most often recognized for the Pressure Law which established that the pressure of an enclosed gas is directly proportional to its temperature and' - 'Gas constant "V" is the volume of gas (SI unit cubic metres), "n" is the amount of gas (SI unit moles), "m" is the mass (SI unit kilograms) contained in "V", and "T" is the thermodynamic temperature (SI unit kelvins). "R" is the molar-weight-specific gas constant, discussed below. The gas constant is expressed in the same physical units as molar entropy and molar heat capacity. From the general equation "PV" = "nRT" we get: where "P" is pressure, "V" is volume, "n" is number of moles of a given substance, and "T" is temperature. As pressure is defined as force per unit' - The Boy Who Was a King term. The film presents not only the life of the former Tsar, but also intertwines within the story vignettes of various Bulgarians, who were supporting him, sending him gifts, or merely tattooing his face on their body. The story is told through personal footage and vast amounts of archive material. The film received praise for its editing and use of archives with Variety's Robert Koehler writing that "Pic’s terrific use of archival footage includes an exiled Simeon interviewed in the early ’60s, disputing his playboy rep." and "Editing is aces." 
The Boy Who Was a King The Boy Who Was - Francis Hauksbee In 1708, Hauksbee independently discovered Charles's law of gases, which states that, for a given mass of gas at a constant pressure, the volume of the gas is proportional to its temperature. Hauksbee published accounts of his experiments in the Royal Society's journal "Philosophical Transactions". In 1709 he self-published "Physico-Mechanical Experiments on Various Subjects" which collected together many of these experiments along with discussion that summarized much of his scientific work. An Italian translation was published in 1716. A second edition was published posthumously in 1719. There were also translations to Dutch (1735) and French (1754). The Royal Society Hauksbee - 'Boyle''s law air moves from high to low pressure. Related phenomena: Other gas laws: Boyle''s law Boyle''s law, sometimes referred to as the Boyle–Mariotte law, or Mariotte''s law (especially in France), is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. A modern statement of Boyle''s law is The absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the volume it occupies if the temperature and amount of gas remain unchanged within a closed system. Mathematically, Boyle''s law can be stated as or' - Boyle's law of the gas, and "k" is a constant. The equation states that the product of pressure and volume is a constant for a given mass of confined gas and this holds as long as the temperature is constant. For comparing the same substance under two different sets of conditions, the law can be usefully expressed as The equation shows that, as volume increases, the pressure of the gas decreases in proportion. Similarly, as volume decreases, the pressure of the gas increases. The law was named after chemist and physicist Robert Boyle, who published the original law in 1662. This relationship - source_sentence: Peter Stuyvesant, born in Holland, became Governor of which American city in 1647? sentences: - Peter Stuyvesant at the corner of Thirteenth Street and Third Avenue until 1867 when it was destroyed by a storm, bearing fruit almost to the last. The house was destroyed by fire in 1777. He also built an executive mansion of stone called Whitehall. In 1645, Stuyvesant married Judith Bayard (–1687) of the Bayard family. Her brother, Samuel Bayard, was the husband of Stuyvesant's sister, Anna Stuyvesant. Petrus and Judith had two sons together. He died in August 1672 and his body was entombed in the east wall of St. Mark's Church in-the-Bowery, which sits on the site of Stuyvesant’s family chapel. - 'Peter Stuyvesant (cigarette) can amount to millions of dollars and finally criminal prosecution - if companies wilfully break the laws. However last year, when questioned on why no such action was being pursued against Imperial Tobacco a spokeswoman for Federal Health said: ""No instances of non-compliance with the Act have been identified by the Department that warrant the initiation of Court proceedings in the first instance, and without attempting alternative dispute resolution to achieve compliance"". Peter Stuyvesant is or was sold in the following countries: Canada, United States, United Kingdom, Luxembourg, Belgium, The Netherlands, Germany, France, Austria, Switzerland, Spain, Italy, Czech Republic, Greece,' - Jochem Pietersen Kuyter September 25, 1647, until the city was incorporated, in 1653, when he was made schout (sheriff). 
Kuyter twice came in conflict with the Director of New Netherland. Kuyter was a man of good education, what is evident by his dealings with Willem Kieft., who he believed damaged the colony with his policies and the start of Kieft's War in 1643. In 1647, when Peter Stuyvesant arrived in New Amsterdam to replace Kieft, Kuyter and Cornelis Melyn acting in name of the citizens of New Amsterdam, brought charges against the outgoing governor, demanding an investigation of his conduct while in office. - Peter Stuyvesant (cigarette) half of its regular users"" and called the packaging changes ""the ultimate sick joke from big tobacco"". In 2013, it was reported that Imperial Tobacco Australia had sent marketing material to WA tobacco retailers which promotes limited edition packs of "Peter Stuyvesant + Loosie", which came with 26 cigarettes. The material included images of a young woman with pink hair putting on lipstick and men on the streets of New York and also included a calendar and small poster that were clearly intended to glamorise smoking. Anti-smoking campaigner Mike Daube said although the material did not break the law because - 'Peter Stuyvesant but the order was soon revoked under pressure from the States of Holland and the city of Amsterdam. Stuyvesant prepared against an attack by ordering the citizens to dig a ditch from the North River to the East River and to erect a fortification. In 1653, a convention of two deputies from each village in New Netherland demanded reforms, and Stuyvesant commanded that assembly to disperse, saying: "We derive our authority from God and the company, not from a few ignorant subjects." In the summer of 1655, he sailed down the Delaware River with a fleet of seven vessels and' - Peter Stuyvesant Dutch Reformed church, a Calvinist denomination, holding to the Three Forms of Unity (Belgic Confession, Heidelberg Catechism, Canons of Dordt). The English were Anglicans, holding to the 39 Articles, a Protestant confession, with bishops. In 1665, Stuyvesant went to the Netherlands to report on his term as governor. On his return to the colony, he spent the remainder of his life on his farm of sixty-two acres outside the city, called the Great Bouwerie, beyond which stretched the woods and swamps of the village of Nieuw Haarlem. A pear tree that he reputedly brought from the Netherlands in 1647 remained
---

# SentenceTransformer based on seongil-dn/unsupervised_20m_3800

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [seongil-dn/unsupervised_20m_3800](https://huggingface.co/seongil-dn/unsupervised_20m_3800). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [seongil-dn/unsupervised_20m_3800](https://huggingface.co/seongil-dn/unsupervised_20m_3800) <!-- at revision 1cda749f242e2b5c9e4f3c1122a61e76fec1fee5 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-756")

# Run inference
sentences = [
    'Peter Stuyvesant, born in Holland, became Governor of which American city in 1647?',
    'Peter Stuyvesant (cigarette) half of its regular users"" and called the packaging changes ""the ultimate sick joke from big tobacco"". In 2013, it was reported that Imperial Tobacco Australia had sent marketing material to WA tobacco retailers which promotes limited edition packs of "Peter Stuyvesant + Loosie", which came with 26 cigarettes. The material included images of a young woman with pink hair putting on lipstick and men on the streets of New York and also included a calendar and small poster that were clearly intended to glamorise smoking. Anti-smoking campaigner Mike Daube said although the material did not break the law because',
    'Peter Stuyvesant (cigarette) can amount to millions of dollars and finally criminal prosecution - if companies wilfully break the laws. However last year, when questioned on why no such action was being pursued against Imperial Tobacco a spokeswoman for Federal Health said: ""No instances of non-compliance with the Act have been identified by the Department that warrant the initiation of Court proceedings in the first instance, and without attempting alternative dispute resolution to achieve compliance"". Peter Stuyvesant is or was sold in the following countries: Canada, United States, United Kingdom, Luxembourg, Belgium, The Netherlands, Germany, France, Austria, Switzerland, Spain, Italy, Czech Republic, Greece,',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 1,138,596 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, and <code>negative_5</code>
* Approximate statistics based on the first 1000 samples:

| | anchor | positive | negative | negative_2 | negative_3 | negative_4 | negative_5 |
|:--|:--|:--|:--|:--|:--|:--|:--|
| type | string | string | string | string | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 22.32 tokens</li><li>max: 119 tokens</li></ul> | <ul><li>min: 127 tokens</li><li>mean: 157.45 tokens</li><li>max: 420 tokens</li></ul> | <ul><li>min: 122 tokens</li><li>mean: 154.65 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 122 tokens</li><li>mean: 155.52 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 122 tokens</li><li>mean: 156.04 tokens</li><li>max: 284 tokens</li></ul> | <ul><li>min: 124 tokens</li><li>mean: 156.3 tokens</li><li>max: 268 tokens</li></ul> | <ul><li>min: 121 tokens</li><li>mean: 156.15 tokens</li><li>max: 249 tokens</li></ul> |

* Samples:

| anchor | positive | negative | negative_2 | negative_3 | negative_4 | negative_5 |
|:--|:--|:--|:--|:--|:--|:--|
----------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What African country is projected to pass the United States in population by the year 2055?</code> | <code>African immigration to the United States officially 40,000 African immigrants, although it has been estimated that the population is actually four times this number when considering undocumented immigrants. The majority of these immigrants were born in Ethiopia, Egypt, Nigeria, and South Africa. African immigrants like many other immigrant groups are likely to establish and find success in small businesses. Many Africans that have seen the social and economic stability that comes from ethnic enclaves such as Chinatowns have recently been establishing ethnic enclaves of their own at much higher rates to reap the benefits of such communities. Such examples include Little Ethiopia in Los Angeles and</code> | <code>What Will Happen to the Gang Next Year? watching television at the time of the broadcast. This made it the lowest-rated episode in "30 Rock"<nowiki>'</nowiki>s history. and a decrease from the previous episode "The Return of Avery Jessup" (2.92 million) What Will Happen to the Gang Next Year? "What Will Happen to the Gang Next Year?" is the twenty-second and final episode of the sixth season of the American television comedy series "30 Rock", and the 125th overall episode of the series. It was directed by Michael Engler, and written by Matt Hubbard. The episode originally aired on the National Broadcasting Company (NBC) network in the United States</code> | <code>Christianity in the United States Christ is the fifth-largest denomination, the largest Pentecostal church, and the largest traditionally African-American denomination in the nation. Among Eastern Christian denominations, there are several Eastern Orthodox and Oriental Orthodox churches, with just below 1 million adherents in the US, or 0.4% of the total population. Christianity was introduced to the Americas as it was first colonized by Europeans beginning in the 16th and 17th centuries. Going forward from its foundation, the United States has been called a Protestant nation by a variety of sources. Immigration further increased Christian numbers. Today most Christian churches in the United States are either</code> | <code>What Will Happen to the Gang Next Year? What Will Happen to the Gang Next Year? "What Will Happen to the Gang Next Year?" is the twenty-second and final episode of the sixth season of the American television comedy series "30 Rock", and the 125th overall episode of the series. It was directed by Michael Engler, and written by Matt Hubbard. The episode originally aired on the National Broadcasting Company (NBC) network in the United States on May 17, 2012. 
In the episode, Jack (Alec Baldwin) and Avery (Elizabeth Banks) seek to renew their vows; Criss (James Marsden) sets out to show Liz (Tina Fey) he can pay</code> | <code>History of the Jews in the United States Representatives by Rep. Samuel Dickstein (D; New York). This also failed to pass. During the Holocaust, fewer than 30,000 Jews a year reached the United States, and some were turned away due to immigration policies. The U.S. did not change its immigration policies until 1948. Currently, laws requiring teaching of the Holocaust are on the books in five states. The Holocaust had a profound impact on the community in the United States, especially after 1960, as Jews tried to comprehend what had happened, and especially to commemorate and grapple with it when looking to the future. Abraham Joshua Heschel summarized</code> | <code>Public holidays in the United States will have very few customers that day. The labor force in the United States comprises about 62% (as of 2014) of the general population. In the United States, 97% of the private sector businesses determine what days this sector of the population gets paid time off, according to a study by the Society for Human Resource Management. The following holidays are observed by the majority of US businesses with paid time off: This list of holidays is based off the official list of federal holidays by year from the US Government. The holidays however are at the discretion of employers</code> | | <code>Which is the largest species of the turtle family?</code> | <code>Loggerhead sea turtle turtle is debated, but most authors consider it a single polymorphic species. Molecular genetics has confirmed hybridization of the loggerhead sea turtle with the Kemp's ridley sea turtle, hawksbill sea turtle, and green sea turtles. The extent of natural hybridization is not yet determined; however, second-generation hybrids have been reported, suggesting some hybrids are fertile. Although evidence is lacking, modern sea turtles probably descended from a single common ancestor during the Cretaceous period. Like all other sea turtles except the leatherback, loggerheads are members of the ancient family Cheloniidae, and appeared about 40 million years ago. Of the six species</code> | <code>Convention on the Conservation of Migratory Species of Wild Animals take joint action. At May 2018, there were 126 Parties to the Convention. The CMS Family covers a great diversity of migratory species. The Appendices of CMS include many mammals, including land mammals, marine mammals and bats; birds; fish; reptiles and one insect. Among the instruments, AEWA covers 254 species of birds that are ecologically dependent on wetlands for at least part of their annual cycle. EUROBATS covers 52 species of bat, the Memorandum of Understanding on the Conservation of Migratory Sharks seven species of shark, the IOSEA Marine Turtle MOU six species of marine turtle and the Raptors MoU</code> | <code>Razor-backed musk turtle Razor-backed musk turtle The razor-backed musk turtle ("Sternotherus carinatus") is a species of turtle in the family Kinosternidae. The species is native to the southern United States. There are no subspecies that are recognized as being valid. "S. carinatus" is found in the states of Alabama, Arkansas, Louisiana, Mississippi, Oklahoma, and Texas. The razor-backed musk turtle grows to a straight carapace length of about . It has a brown-colored carapace, with black markings at the edges of each scute. 
The carapace has a distinct, sharp keel down the center of its length, giving the species its common name. The body</code> | <code>African helmeted turtle African helmeted turtle The African helmeted turtle ("Pelomedusa subrufa"), also known commonly as the marsh terrapin, the crocodile turtle, or in the pet trade as the African side-necked turtle, is a species of omnivorous side-necked terrapin in the family Pelomedusidae. The species naturally occurs in fresh and stagnant water bodies throughout much of Sub-Saharan Africa, and in southern Yemen. The marsh terrapin is typically a rather small turtle, with most individuals being less than in straight carapace length, but one has been recorded with a length of . It has a black or brown carapace. The top of the tail</code> | <code>Box turtle Box turtle Box turtles are North American turtles of the genus Terrapene. Although box turtles are superficially similar to tortoises in terrestrial habits and overall appearance, they are actually members of the American pond turtle family (Emydidae). The twelve taxa which are distinguished in the genus are distributed over four species. They are largely characterized by having a domed shell, which is hinged at the bottom, allowing the animal to close its shell tightly to escape predators. The genus name "Terrapene" was coined by Merrem in 1820 as a genus separate from "Emys" for those species which had a sternum</code> | <code>Vallarta mud turtle Vallarta mud turtle The Vallarta mud turtle ("Kinosternon vogti") is a recently identified species of mud turtle in the family Kinosternidae. While formerly considered conspecific with the Jalisco mud turtle, further studies indicated that it was a separate species. It can be identified by a combination of the number of plastron and carapace scutes, body size, and the distinctive yellow rostral shield in males. It is endemic to Mexican state of Jalisco. It is only known from a few human-created or human-affected habitats (such as small streams and ponds) found around Puerto Vallarta. It is one of only 3 species</code> | | <code>How many gallons of beer are in an English barrel?</code> | <code>Low-alcohol beer Prohibition in the United States. Near beer could not legally be labeled as "beer" and was officially classified as a "cereal beverage". The public, however, almost universally called it "near beer". The most popular "near beer" was Bevo, brewed by the Anheuser-Busch company. The Pabst company brewed "Pablo", Miller brewed "Vivo", and Schlitz brewed "Famo". Many local and regional breweries stayed in business by marketing their own near-beers. By 1921 production of near beer had reached over 300 million US gallons (1 billion L) a year (36 L/s). A popular illegal practice was to add alcohol to near beer. The</code> | <code>Keg terms "half-barrel" and "quarter-barrel" are derived from the U.S. beer barrel, legally defined as being equal to 31 U.S. gallons (this is not the same volume as some other units also known as "barrels"). A 15.5 U.S. gallon keg is also equal to: However, beer kegs can come in many sizes: In European countries the most common keg size is 50 liters. This includes the UK, which uses a non-metric standard keg of 11 imperial gallons, which is coincidentally equal to . The German DIN 6647-1 and DIN 6647-2 have also defined kegs in the sizes of 30 and 20</code> | <code>Beer in Chile craft beers. They are generally low or very low volume producers. 
In Chile there are more than 150 craft beer producers distributed along the 15 Chilean Regions. The list below includes: Beer in Chile The primary beer brewed and consumed in Chile is pale lager, though the country also has a tradition of brewing corn beer, known as chicha. Chile’s beer history has a strong German influence – some of the bigger beer producers are from the country’s southern lake district, a region populated by a great number of German immigrants during the 19th century. Chile also produces English ale-style</code> | <code>Barrel variation. In modern times, produce barrels for all dry goods, excepting cranberries, contain 7,056 cubic inches, about 115.627 L. Barrel A barrel, cask, or tun is a hollow cylindrical container, traditionally made of wooden staves bound by wooden or metal hoops. Traditionally, the barrel was a standard size of measure referring to a set capacity or weight of a given commodity. For example, in the UK a barrel of beer refers to a quantity of . Wine was shipped in barrels of . Modern wooden barrels for wine-making are either made of French common oak ("Quercus robur") and white oak</code> | <code>The Rare Barrel The Rare Barrel The Rare Barrel is a brewery and brewpub in Berkeley, California, United States, that exclusively produces sour beers. Founders Jay Goodwin and Alex Wallash met while attending UCSB. They started home-brewing in their apartment and decided that they would one day start a brewery together. Goodwin started working at The Bruery, where he worked his way from a production assistant to brewer, eventually becoming the head of their barrel aging program. The Rare Barrel brewed its first batch of beer in February 2013, and opened its tasting room on December 27, 2013. The Rare Barrel was named</code> | <code>Barrel (unit) Barrel (unit) A barrel is one of several units of volume applied in various contexts; there are dry barrels, fluid barrels (such as the UK beer barrel and US beer barrel), oil barrels and so on. For historical reasons the volumes of some barrel units are roughly double the volumes of others; volumes in common usage range from about . In many connections the term "drum" is used almost interchangeably with "barrel". Since medieval times the term barrel as a unit of measure has had various meanings throughout Europe, ranging from about 100 litres to 1000 litres. 
The name was</code> | * Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters: ```json {'guide': SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ), 'temperature': 0.01} ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 1024 - `learning_rate`: 3e-05 - `weight_decay`: 0.01 - `warmup_ratio`: 0.05 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 1024 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: 
False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0036 | 1 | 1.0283 | | 0.0072 | 2 | 1.0155 | | 0.0108 | 3 | 0.9858 | | 0.0144 | 4 | 0.9519 | | 0.0181 | 5 | 0.9434 | | 0.0217 | 6 | 0.898 | | 0.0253 | 7 | 0.8798 | | 0.0289 | 8 | 0.7976 | | 0.0325 | 9 | 0.7797 | | 0.0361 | 10 | 0.7464 | | 0.0397 | 11 | 0.743 | | 0.0433 | 12 | 0.716 | | 0.0469 | 13 | 0.7076 | | 0.0505 | 14 | 0.666 | | 0.0542 | 15 | 0.631 | | 0.0578 | 16 | 0.5905 | | 0.0614 | 17 | 0.6537 | | 0.0650 | 18 | 0.5755 | | 0.0686 | 19 | 0.5422 | | 0.0722 | 20 | 0.5393 | | 0.0758 | 21 | 0.5741 | | 0.0794 | 22 | 0.498 | | 0.0830 | 23 | 0.5522 | | 0.0866 | 24 | 0.5592 | | 0.0903 | 25 | 0.4797 | | 0.0939 | 26 | 0.4684 | | 0.0975 | 27 | 0.5207 | | 0.1011 | 28 | 0.4692 | | 0.1047 | 29 | 0.4459 | | 0.1083 | 30 | 0.4439 | | 0.1119 | 31 | 0.4656 | | 0.1155 | 32 | 0.4737 | | 0.1191 | 33 | 0.4391 | | 0.1227 | 34 | 0.4386 | | 0.1264 | 35 | 0.4107 | | 0.1300 | 36 | 0.4513 | | 0.1336 | 37 | 0.3789 | | 0.1372 | 38 | 0.4103 | | 0.1408 | 39 | 0.3929 | | 0.1444 | 40 | 0.4226 | | 0.1480 | 41 | 0.391 | | 0.1516 | 42 | 0.3674 | | 0.1552 | 43 | 0.3607 | | 0.1588 | 44 | 0.3738 | | 0.1625 | 45 | 0.3842 | | 0.1661 | 46 | 0.3498 | | 0.1697 | 47 | 0.3586 | | 0.1733 | 48 | 0.3538 | | 0.1769 | 49 | 0.3572 | | 0.1805 | 50 | 0.3547 | | 0.1841 | 51 | 0.3179 | | 0.1877 | 52 | 0.3436 | | 0.1913 | 53 | 0.3502 | | 0.1949 | 54 | 0.3381 | | 0.1986 | 55 | 0.3547 | | 0.2022 | 56 | 0.3362 | | 0.2058 | 57 | 0.3407 | | 0.2094 | 58 | 0.31 | | 0.2130 | 59 | 0.3039 | | 0.2166 | 60 | 0.3362 | | 0.2202 | 61 | 0.2948 | | 0.2238 | 62 | 0.3429 | | 0.2274 | 63 | 0.3096 | | 0.2310 | 64 | 0.35 | | 0.2347 | 65 | 0.2997 | | 0.2383 | 66 | 0.3258 | | 0.2419 | 67 | 0.3376 | | 0.2455 | 68 | 0.3213 | | 0.2491 | 69 | 0.3185 | | 0.2527 | 70 | 0.3282 | | 0.2563 | 71 | 0.2988 | | 0.2599 | 72 | 0.33 | | 0.2635 | 73 | 0.3066 | | 0.2671 | 74 | 0.3303 | | 0.2708 | 75 | 0.3067 | | 0.2744 | 76 | 0.2996 | | 0.2780 | 77 | 0.3063 | | 0.2816 | 78 | 0.3235 | | 0.2852 | 79 | 0.2902 | | 0.2888 | 80 | 0.302 | | 0.2924 | 81 | 0.3223 | | 0.2960 | 82 | 0.297 | | 0.2996 | 83 | 0.2936 | | 0.3032 | 84 | 0.3279 | | 0.3069 | 85 | 0.2973 | | 0.3105 | 86 | 0.2881 | | 0.3141 | 87 | 0.3014 | | 0.3177 | 88 | 0.2986 | | 0.3213 | 89 | 0.3057 | | 0.3249 | 90 | 0.2887 | | 0.3285 | 91 | 0.2765 | | 0.3321 | 92 | 0.2818 | | 0.3357 | 93 | 0.2904 | | 0.3394 | 94 | 0.267 | | 0.3430 | 95 | 0.2948 | | 0.3466 | 96 | 0.2766 | | 0.3502 | 97 | 0.2782 | | 0.3538 | 98 | 0.3082 | | 0.3574 | 99 | 0.2697 | | 0.3610 | 100 | 0.3006 | | 0.3646 | 101 | 0.2986 | | 0.3682 | 102 | 0.2789 | | 0.3718 | 103 | 0.2756 | | 0.3755 | 104 | 0.2884 | | 
0.3791 | 105 | 0.273 | | 0.3827 | 106 | 0.2687 | | 0.3863 | 107 | 0.2808 | | 0.3899 | 108 | 0.2763 | | 0.3935 | 109 | 0.2738 | | 0.3971 | 110 | 0.2642 | | 0.4007 | 111 | 0.2612 | | 0.4043 | 112 | 0.2859 | | 0.4079 | 113 | 0.2558 | | 0.4116 | 114 | 0.2565 | | 0.4152 | 115 | 0.2747 | | 0.4188 | 116 | 0.2684 | | 0.4224 | 117 | 0.2643 | | 0.4260 | 118 | 0.241 | | 0.4296 | 119 | 0.2563 | | 0.4332 | 120 | 0.2754 | | 0.4368 | 121 | 0.2503 | | 0.4404 | 122 | 0.2544 | | 0.4440 | 123 | 0.2729 | | 0.4477 | 124 | 0.2589 | | 0.4513 | 125 | 0.2626 | | 0.4549 | 126 | 0.2693 | | 0.4585 | 127 | 0.2687 | | 0.4621 | 128 | 0.2903 | | 0.4657 | 129 | 0.2663 | | 0.4693 | 130 | 0.2604 | | 0.4729 | 131 | 0.2601 | | 0.4765 | 132 | 0.2649 | | 0.4801 | 133 | 0.2597 | | 0.4838 | 134 | 0.2608 | | 0.4874 | 135 | 0.245 | | 0.4910 | 136 | 0.2587 | | 0.4946 | 137 | 0.2618 | | 0.4982 | 138 | 0.2599 | | 0.5018 | 139 | 0.265 | | 0.5054 | 140 | 0.2427 | | 0.5090 | 141 | 0.2448 | | 0.5126 | 142 | 0.2608 | | 0.5162 | 143 | 0.2188 | | 0.5199 | 144 | 0.2471 | | 0.5235 | 145 | 0.2604 | | 0.5271 | 146 | 0.2571 | | 0.5307 | 147 | 0.2684 | | 0.5343 | 148 | 0.2319 | | 0.5379 | 149 | 0.2572 | | 0.5415 | 150 | 0.2243 | | 0.5451 | 151 | 0.2562 | | 0.5487 | 152 | 0.2457 | | 0.5523 | 153 | 0.255 | | 0.5560 | 154 | 0.2664 | | 0.5596 | 155 | 0.24 | | 0.5632 | 156 | 0.2612 | | 0.5668 | 157 | 0.243 | | 0.5704 | 158 | 0.2345 | | 0.5740 | 159 | 0.2359 | | 0.5776 | 160 | 0.2384 | | 0.5812 | 161 | 0.2541 | | 0.5848 | 162 | 0.2496 | | 0.5884 | 163 | 0.2429 | | 0.5921 | 164 | 0.2411 | | 0.5957 | 165 | 0.2261 | | 0.5993 | 166 | 0.2164 | | 0.6029 | 167 | 0.2251 | | 0.6065 | 168 | 0.2417 | | 0.6101 | 169 | 0.2494 | | 0.6137 | 170 | 0.2359 | | 0.6173 | 171 | 0.2489 | | 0.6209 | 172 | 0.2261 | | 0.6245 | 173 | 0.2367 | | 0.6282 | 174 | 0.2355 | | 0.6318 | 175 | 0.2423 | | 0.6354 | 176 | 0.2454 | | 0.6390 | 177 | 0.2438 | | 0.6426 | 178 | 0.2415 | | 0.6462 | 179 | 0.2237 | | 0.6498 | 180 | 0.2419 | | 0.6534 | 181 | 0.2373 | | 0.6570 | 182 | 0.2659 | | 0.6606 | 183 | 0.2201 | | 0.6643 | 184 | 0.2342 | | 0.6679 | 185 | 0.2149 | | 0.6715 | 186 | 0.2241 | | 0.6751 | 187 | 0.2443 | | 0.6787 | 188 | 0.2489 | | 0.6823 | 189 | 0.2354 | | 0.6859 | 190 | 0.2483 | | 0.6895 | 191 | 0.2193 | | 0.6931 | 192 | 0.229 | | 0.6968 | 193 | 0.2335 | | 0.7004 | 194 | 0.2484 | | 0.7040 | 195 | 0.2317 | | 0.7076 | 196 | 0.2203 | | 0.7112 | 197 | 0.2329 | | 0.7148 | 198 | 0.2084 | | 0.7184 | 199 | 0.2341 | | 0.7220 | 200 | 0.2369 | | 0.7256 | 201 | 0.2364 | | 0.7292 | 202 | 0.2276 | | 0.7329 | 203 | 0.215 | | 0.7365 | 204 | 0.2486 | | 0.7401 | 205 | 0.2237 | | 0.7437 | 206 | 0.218 | | 0.7473 | 207 | 0.2444 | | 0.7509 | 208 | 0.2276 | | 0.7545 | 209 | 0.2127 | | 0.7581 | 210 | 0.2283 | | 0.7617 | 211 | 0.2234 | | 0.7653 | 212 | 0.207 | | 0.7690 | 213 | 0.24 | | 0.7726 | 214 | 0.2317 | | 0.7762 | 215 | 0.2056 | | 0.7798 | 216 | 0.2149 | | 0.7834 | 217 | 0.2211 | | 0.7870 | 218 | 0.2232 | | 0.7906 | 219 | 0.2222 | | 0.7942 | 220 | 0.2481 | | 0.7978 | 221 | 0.227 | | 0.8014 | 222 | 0.2305 | | 0.8051 | 223 | 0.2091 | | 0.8087 | 224 | 0.2278 | | 0.8123 | 225 | 0.2123 | | 0.8159 | 226 | 0.2233 | | 0.8195 | 227 | 0.2365 | | 0.8231 | 228 | 0.2165 | | 0.8267 | 229 | 0.2192 | | 0.8303 | 230 | 0.2145 | | 0.8339 | 231 | 0.2382 | | 0.8375 | 232 | 0.2232 | | 0.8412 | 233 | 0.2273 | | 0.8448 | 234 | 0.2296 | | 0.8484 | 235 | 0.2229 | | 0.8520 | 236 | 0.2213 | | 0.8556 | 237 | 0.2343 | | 0.8592 | 238 | 0.2208 | | 0.8628 | 239 | 0.2315 | | 0.8664 | 240 | 0.2137 | | 0.8700 | 241 | 0.2201 | | 0.8736 
| 242 | 0.2185 | | 0.8773 | 243 | 0.2337 | | 0.8809 | 244 | 0.2153 | | 0.8845 | 245 | 0.2369 | | 0.8881 | 246 | 0.2216 | | 0.8917 | 247 | 0.2338 | | 0.8953 | 248 | 0.2241 | | 0.8989 | 249 | 0.213 | | 0.9025 | 250 | 0.2245 | | 0.9061 | 251 | 0.2074 | | 0.9097 | 252 | 0.2283 | | 0.9134 | 253 | 0.2003 | | 0.9170 | 254 | 0.2099 | | 0.9206 | 255 | 0.2288 | | 0.9242 | 256 | 0.2168 | | 0.9278 | 257 | 0.215 | | 0.9314 | 258 | 0.2146 | | 0.9350 | 259 | 0.2126 | | 0.9386 | 260 | 0.2178 | | 0.9422 | 261 | 0.2065 | | 0.9458 | 262 | 0.2327 | | 0.9495 | 263 | 0.2116 | | 0.9531 | 264 | 0.2324 | | 0.9567 | 265 | 0.2235 | | 0.9603 | 266 | 0.2189 | | 0.9639 | 267 | 0.2175 | | 0.9675 | 268 | 0.2171 | | 0.9711 | 269 | 0.1925 | | 0.9747 | 270 | 0.225 | | 0.9783 | 271 | 0.2149 | | 0.9819 | 272 | 0.204 | | 0.9856 | 273 | 0.2004 | | 0.9892 | 274 | 0.2055 | | 0.9928 | 275 | 0.2045 | | 0.9964 | 276 | 0.2186 | | 1.0 | 277 | 0.2215 | | 1.0036 | 278 | 0.1545 | | 1.0072 | 279 | 0.169 | | 1.0108 | 280 | 0.152 | | 1.0144 | 281 | 0.1597 | | 1.0181 | 282 | 0.1626 | | 1.0217 | 283 | 0.1692 | | 1.0253 | 284 | 0.1639 | | 1.0289 | 285 | 0.1638 | | 1.0325 | 286 | 0.1507 | | 1.0361 | 287 | 0.1594 | | 1.0397 | 288 | 0.1621 | | 1.0433 | 289 | 0.1565 | | 1.0469 | 290 | 0.1549 | | 1.0505 | 291 | 0.1731 | | 1.0542 | 292 | 0.152 | | 1.0578 | 293 | 0.1586 | | 1.0614 | 294 | 0.1593 | | 1.0650 | 295 | 0.1406 | | 1.0686 | 296 | 0.1524 | | 1.0722 | 297 | 0.1474 | | 1.0758 | 298 | 0.158 | | 1.0794 | 299 | 0.1743 | | 1.0830 | 300 | 0.1485 | | 1.0866 | 301 | 0.1648 | | 1.0903 | 302 | 0.1337 | | 1.0939 | 303 | 0.1554 | | 1.0975 | 304 | 0.1434 | | 1.1011 | 305 | 0.1642 | | 1.1047 | 306 | 0.159 | | 1.1083 | 307 | 0.1658 | | 1.1119 | 308 | 0.1554 | | 1.1155 | 309 | 0.1425 | | 1.1191 | 310 | 0.1432 | | 1.1227 | 311 | 0.1517 | | 1.1264 | 312 | 0.148 | | 1.1300 | 313 | 0.1636 | | 1.1336 | 314 | 0.1735 | | 1.1372 | 315 | 0.151 | | 1.1408 | 316 | 0.1423 | | 1.1444 | 317 | 0.1501 | | 1.1480 | 318 | 0.1537 | | 1.1516 | 319 | 0.1554 | | 1.1552 | 320 | 0.1553 | | 1.1588 | 321 | 0.149 | | 1.1625 | 322 | 0.1605 | | 1.1661 | 323 | 0.1551 | | 1.1697 | 324 | 0.1555 | | 1.1733 | 325 | 0.1443 | | 1.1769 | 326 | 0.1533 | | 1.1805 | 327 | 0.1658 | | 1.1841 | 328 | 0.15 | | 1.1877 | 329 | 0.1626 | | 1.1913 | 330 | 0.172 | | 1.1949 | 331 | 0.1542 | | 1.1986 | 332 | 0.166 | | 1.2022 | 333 | 0.1513 | | 1.2058 | 334 | 0.1612 | | 1.2094 | 335 | 0.1521 | | 1.2130 | 336 | 0.1552 | | 1.2166 | 337 | 0.1503 | | 1.2202 | 338 | 0.1613 | | 1.2238 | 339 | 0.1563 | | 1.2274 | 340 | 0.1429 | | 1.2310 | 341 | 0.1587 | | 1.2347 | 342 | 0.1477 | | 1.2383 | 343 | 0.1561 | | 1.2419 | 344 | 0.1418 | | 1.2455 | 345 | 0.1495 | | 1.2491 | 346 | 0.1533 | | 1.2527 | 347 | 0.1521 | | 1.2563 | 348 | 0.1422 | | 1.2599 | 349 | 0.1446 | | 1.2635 | 350 | 0.146 | | 1.2671 | 351 | 0.1473 | | 1.2708 | 352 | 0.1566 | | 1.2744 | 353 | 0.1411 | | 1.2780 | 354 | 0.1502 | | 1.2816 | 355 | 0.1383 | | 1.2852 | 356 | 0.1622 | | 1.2888 | 357 | 0.1391 | | 1.2924 | 358 | 0.1455 | | 1.2960 | 359 | 0.1541 | | 1.2996 | 360 | 0.1476 | | 1.3032 | 361 | 0.1662 | | 1.3069 | 362 | 0.1476 | | 1.3105 | 363 | 0.1452 | | 1.3141 | 364 | 0.1372 | | 1.3177 | 365 | 0.1542 | | 1.3213 | 366 | 0.1531 | | 1.3249 | 367 | 0.1623 | | 1.3285 | 368 | 0.1544 | | 1.3321 | 369 | 0.1625 | | 1.3357 | 370 | 0.1459 | | 1.3394 | 371 | 0.1474 | | 1.3430 | 372 | 0.1499 | | 1.3466 | 373 | 0.1495 | | 1.3502 | 374 | 0.1361 | | 1.3538 | 375 | 0.1444 | | 1.3574 | 376 | 0.1495 | | 1.3610 | 377 | 0.1583 | | 1.3646 | 378 | 0.1642 | | 1.3682 | 379 | 
0.1646 | | 1.3718 | 380 | 0.1595 | | 1.3755 | 381 | 0.149 | | 1.3791 | 382 | 0.1448 | | 1.3827 | 383 | 0.1603 | | 1.3863 | 384 | 0.1269 | | 1.3899 | 385 | 0.1491 | | 1.3935 | 386 | 0.1367 | | 1.3971 | 387 | 0.1501 | | 1.4007 | 388 | 0.1414 | | 1.4043 | 389 | 0.156 | | 1.4079 | 390 | 0.1428 | | 1.4116 | 391 | 0.1559 | | 1.4152 | 392 | 0.1452 | | 1.4188 | 393 | 0.1547 | | 1.4224 | 394 | 0.1432 | | 1.4260 | 395 | 0.1648 | | 1.4296 | 396 | 0.166 | | 1.4332 | 397 | 0.1485 | | 1.4368 | 398 | 0.1494 | | 1.4404 | 399 | 0.1635 | | 1.4440 | 400 | 0.1498 | | 1.4477 | 401 | 0.1509 | | 1.4513 | 402 | 0.1431 | | 1.4549 | 403 | 0.1547 | | 1.4585 | 404 | 0.1576 | | 1.4621 | 405 | 0.1426 | | 1.4657 | 406 | 0.132 | | 1.4693 | 407 | 0.1511 | | 1.4729 | 408 | 0.1551 | | 1.4765 | 409 | 0.16 | | 1.4801 | 410 | 0.1507 | | 1.4838 | 411 | 0.1591 | | 1.4874 | 412 | 0.1536 | | 1.4910 | 413 | 0.1507 | | 1.4946 | 414 | 0.1564 | | 1.4982 | 415 | 0.153 | | 1.5018 | 416 | 0.1404 | | 1.5054 | 417 | 0.1627 | | 1.5090 | 418 | 0.1432 | | 1.5126 | 419 | 0.1456 | | 1.5162 | 420 | 0.1369 | | 1.5199 | 421 | 0.1554 | | 1.5235 | 422 | 0.1412 | | 1.5271 | 423 | 0.1547 | | 1.5307 | 424 | 0.1555 | | 1.5343 | 425 | 0.1575 | | 1.5379 | 426 | 0.1595 | | 1.5415 | 427 | 0.1464 | | 1.5451 | 428 | 0.1738 | | 1.5487 | 429 | 0.1692 | | 1.5523 | 430 | 0.1566 | | 1.5560 | 431 | 0.1452 | | 1.5596 | 432 | 0.1433 | | 1.5632 | 433 | 0.1584 | | 1.5668 | 434 | 0.1579 | | 1.5704 | 435 | 0.157 | | 1.5740 | 436 | 0.1533 | | 1.5776 | 437 | 0.148 | | 1.5812 | 438 | 0.1381 | | 1.5848 | 439 | 0.1605 | | 1.5884 | 440 | 0.163 | | 1.5921 | 441 | 0.1492 | | 1.5957 | 442 | 0.1601 | | 1.5993 | 443 | 0.1456 | | 1.6029 | 444 | 0.1439 | | 1.6065 | 445 | 0.1553 | | 1.6101 | 446 | 0.1371 | | 1.6137 | 447 | 0.1382 | | 1.6173 | 448 | 0.1458 | | 1.6209 | 449 | 0.14 | | 1.6245 | 450 | 0.1463 | | 1.6282 | 451 | 0.1433 | | 1.6318 | 452 | 0.1472 | | 1.6354 | 453 | 0.1481 | | 1.6390 | 454 | 0.1408 | | 1.6426 | 455 | 0.1525 | | 1.6462 | 456 | 0.1223 | | 1.6498 | 457 | 0.1452 | | 1.6534 | 458 | 0.159 | | 1.6570 | 459 | 0.1389 | | 1.6606 | 460 | 0.1479 | | 1.6643 | 461 | 0.1451 | | 1.6679 | 462 | 0.1651 | | 1.6715 | 463 | 0.1336 | | 1.6751 | 464 | 0.1496 | | 1.6787 | 465 | 0.1384 | | 1.6823 | 466 | 0.143 | | 1.6859 | 467 | 0.1423 | | 1.6895 | 468 | 0.1403 | | 1.6931 | 469 | 0.1577 | | 1.6968 | 470 | 0.1511 | | 1.7004 | 471 | 0.1429 | | 1.7040 | 472 | 0.1445 | | 1.7076 | 473 | 0.1431 | | 1.7112 | 474 | 0.1326 | | 1.7148 | 475 | 0.1554 | | 1.7184 | 476 | 0.1406 | | 1.7220 | 477 | 0.1479 | | 1.7256 | 478 | 0.1521 | | 1.7292 | 479 | 0.1475 | | 1.7329 | 480 | 0.1584 | | 1.7365 | 481 | 0.1393 | | 1.7401 | 482 | 0.1291 | | 1.7437 | 483 | 0.1373 | | 1.7473 | 484 | 0.1555 | | 1.7509 | 485 | 0.1473 | | 1.7545 | 486 | 0.1654 | | 1.7581 | 487 | 0.1568 | | 1.7617 | 488 | 0.1557 | | 1.7653 | 489 | 0.1531 | | 1.7690 | 490 | 0.1385 | | 1.7726 | 491 | 0.1381 | | 1.7762 | 492 | 0.1375 | | 1.7798 | 493 | 0.1472 | | 1.7834 | 494 | 0.1581 | | 1.7870 | 495 | 0.1448 | | 1.7906 | 496 | 0.1443 | | 1.7942 | 497 | 0.1422 | | 1.7978 | 498 | 0.1295 | | 1.8014 | 499 | 0.1463 | | 1.8051 | 500 | 0.1346 | | 1.8087 | 501 | 0.1387 | | 1.8123 | 502 | 0.1463 | | 1.8159 | 503 | 0.1439 | | 1.8195 | 504 | 0.1404 | | 1.8231 | 505 | 0.1433 | | 1.8267 | 506 | 0.136 | | 1.8303 | 507 | 0.14 | | 1.8339 | 508 | 0.1355 | | 1.8375 | 509 | 0.1446 | | 1.8412 | 510 | 0.1564 | | 1.8448 | 511 | 0.1413 | | 1.8484 | 512 | 0.1451 | | 1.8520 | 513 | 0.1453 | | 1.8556 | 514 | 0.1484 | | 1.8592 | 515 | 0.1403 | | 1.8628 | 516 | 0.1568 | 
| 1.8664 | 517 | 0.1566 | | 1.8700 | 518 | 0.1318 | | 1.8736 | 519 | 0.1483 | | 1.8773 | 520 | 0.1339 | | 1.8809 | 521 | 0.1423 | | 1.8845 | 522 | 0.1349 | | 1.8881 | 523 | 0.1302 | | 1.8917 | 524 | 0.1341 | | 1.8953 | 525 | 0.1456 | | 1.8989 | 526 | 0.1334 | | 1.9025 | 527 | 0.1382 | | 1.9061 | 528 | 0.1462 | | 1.9097 | 529 | 0.1315 | | 1.9134 | 530 | 0.1606 | | 1.9170 | 531 | 0.1308 | | 1.9206 | 532 | 0.1319 | | 1.9242 | 533 | 0.1407 | | 1.9278 | 534 | 0.1385 | | 1.9314 | 535 | 0.1471 | | 1.9350 | 536 | 0.1621 | | 1.9386 | 537 | 0.1436 | | 1.9422 | 538 | 0.151 | | 1.9458 | 539 | 0.1423 | | 1.9495 | 540 | 0.1411 | | 1.9531 | 541 | 0.1535 | | 1.9567 | 542 | 0.143 | | 1.9603 | 543 | 0.149 | | 1.9639 | 544 | 0.1384 | | 1.9675 | 545 | 0.1479 | | 1.9711 | 546 | 0.1452 | | 1.9747 | 547 | 0.1372 | | 1.9783 | 548 | 0.1418 | | 1.9819 | 549 | 0.1443 | | 1.9856 | 550 | 0.1344 | | 1.9892 | 551 | 0.1278 | | 1.9928 | 552 | 0.1447 | | 1.9964 | 553 | 0.1366 | | 2.0 | 554 | 0.141 | | 2.0036 | 555 | 0.1161 | | 2.0072 | 556 | 0.1099 | | 2.0108 | 557 | 0.126 | | 2.0144 | 558 | 0.1163 | | 2.0181 | 559 | 0.1234 | | 2.0217 | 560 | 0.1171 | | 2.0253 | 561 | 0.1073 | | 2.0289 | 562 | 0.1126 | | 2.0325 | 563 | 0.1175 | | 2.0361 | 564 | 0.1086 | | 2.0397 | 565 | 0.1038 | | 2.0433 | 566 | 0.1121 | | 2.0469 | 567 | 0.1154 | | 2.0505 | 568 | 0.0973 | | 2.0542 | 569 | 0.1208 | | 2.0578 | 570 | 0.1064 | | 2.0614 | 571 | 0.1159 | | 2.0650 | 572 | 0.1093 | | 2.0686 | 573 | 0.113 | | 2.0722 | 574 | 0.1033 | | 2.0758 | 575 | 0.1152 | | 2.0794 | 576 | 0.1029 | | 2.0830 | 577 | 0.1204 | | 2.0866 | 578 | 0.1079 | | 2.0903 | 579 | 0.1288 | | 2.0939 | 580 | 0.0998 | | 2.0975 | 581 | 0.1058 | | 2.1011 | 582 | 0.1235 | | 2.1047 | 583 | 0.1059 | | 2.1083 | 584 | 0.0998 | | 2.1119 | 585 | 0.1142 | | 2.1155 | 586 | 0.1082 | | 2.1191 | 587 | 0.0973 | | 2.1227 | 588 | 0.1017 | | 2.1264 | 589 | 0.1045 | | 2.1300 | 590 | 0.123 | | 2.1336 | 591 | 0.1065 | | 2.1372 | 592 | 0.1135 | | 2.1408 | 593 | 0.1027 | | 2.1444 | 594 | 0.1166 | | 2.1480 | 595 | 0.1082 | | 2.1516 | 596 | 0.1113 | | 2.1552 | 597 | 0.1108 | | 2.1588 | 598 | 0.114 | | 2.1625 | 599 | 0.1064 | | 2.1661 | 600 | 0.0955 | | 2.1697 | 601 | 0.113 | | 2.1733 | 602 | 0.1136 | | 2.1769 | 603 | 0.1125 | | 2.1805 | 604 | 0.1146 | | 2.1841 | 605 | 0.1054 | | 2.1877 | 606 | 0.1144 | | 2.1913 | 607 | 0.1038 | | 2.1949 | 608 | 0.1113 | | 2.1986 | 609 | 0.1187 | | 2.2022 | 610 | 0.1166 | | 2.2058 | 611 | 0.1035 | | 2.2094 | 612 | 0.1054 | | 2.2130 | 613 | 0.118 | | 2.2166 | 614 | 0.125 | | 2.2202 | 615 | 0.1142 | | 2.2238 | 616 | 0.1119 | | 2.2274 | 617 | 0.1173 | | 2.2310 | 618 | 0.1024 | | 2.2347 | 619 | 0.105 | | 2.2383 | 620 | 0.1025 | | 2.2419 | 621 | 0.1022 | | 2.2455 | 622 | 0.0995 | | 2.2491 | 623 | 0.1022 | | 2.2527 | 624 | 0.1198 | | 2.2563 | 625 | 0.0995 | | 2.2599 | 626 | 0.1162 | | 2.2635 | 627 | 0.1172 | | 2.2671 | 628 | 0.1037 | | 2.2708 | 629 | 0.1093 | | 2.2744 | 630 | 0.1018 | | 2.2780 | 631 | 0.1168 | | 2.2816 | 632 | 0.1015 | | 2.2852 | 633 | 0.101 | | 2.2888 | 634 | 0.1064 | | 2.2924 | 635 | 0.1185 | | 2.2960 | 636 | 0.1055 | | 2.2996 | 637 | 0.1142 | | 2.3032 | 638 | 0.0966 | | 2.3069 | 639 | 0.1039 | | 2.3105 | 640 | 0.1139 | | 2.3141 | 641 | 0.1181 | | 2.3177 | 642 | 0.1168 | | 2.3213 | 643 | 0.1201 | | 2.3249 | 644 | 0.0984 | | 2.3285 | 645 | 0.1068 | | 2.3321 | 646 | 0.1007 | | 2.3357 | 647 | 0.1179 | | 2.3394 | 648 | 0.1043 | | 2.3430 | 649 | 0.1213 | | 2.3466 | 650 | 0.1027 | | 2.3502 | 651 | 0.1119 | | 2.3538 | 652 | 0.1077 | | 2.3574 | 653 | 0.1061 | | 2.3610 
| 654 | 0.1054 | | 2.3646 | 655 | 0.1135 | | 2.3682 | 656 | 0.1136 | | 2.3718 | 657 | 0.1062 | | 2.3755 | 658 | 0.1105 | | 2.3791 | 659 | 0.1157 | | 2.3827 | 660 | 0.1036 | | 2.3863 | 661 | 0.1098 | | 2.3899 | 662 | 0.1195 | | 2.3935 | 663 | 0.1151 | | 2.3971 | 664 | 0.1116 | | 2.4007 | 665 | 0.1086 | | 2.4043 | 666 | 0.1151 | | 2.4079 | 667 | 0.1156 | | 2.4116 | 668 | 0.116 | | 2.4152 | 669 | 0.1055 | | 2.4188 | 670 | 0.1051 | | 2.4224 | 671 | 0.0952 | | 2.4260 | 672 | 0.1012 | | 2.4296 | 673 | 0.1042 | | 2.4332 | 674 | 0.1069 | | 2.4368 | 675 | 0.1148 | | 2.4404 | 676 | 0.0981 | | 2.4440 | 677 | 0.1131 | | 2.4477 | 678 | 0.1026 | | 2.4513 | 679 | 0.1014 | | 2.4549 | 680 | 0.1071 | | 2.4585 | 681 | 0.1171 | | 2.4621 | 682 | 0.1009 | | 2.4657 | 683 | 0.1056 | | 2.4693 | 684 | 0.1107 | | 2.4729 | 685 | 0.1114 | | 2.4765 | 686 | 0.1118 | | 2.4801 | 687 | 0.1166 | | 2.4838 | 688 | 0.1023 | | 2.4874 | 689 | 0.1154 | | 2.4910 | 690 | 0.0968 | | 2.4946 | 691 | 0.1164 | | 2.4982 | 692 | 0.1221 | | 2.5018 | 693 | 0.1131 | | 2.5054 | 694 | 0.1039 | | 2.5090 | 695 | 0.1022 | | 2.5126 | 696 | 0.1052 | | 2.5162 | 697 | 0.1072 | | 2.5199 | 698 | 0.1062 | | 2.5235 | 699 | 0.1035 | | 2.5271 | 700 | 0.107 | | 2.5307 | 701 | 0.1152 | | 2.5343 | 702 | 0.0991 | | 2.5379 | 703 | 0.1139 | | 2.5415 | 704 | 0.1148 | | 2.5451 | 705 | 0.1099 | | 2.5487 | 706 | 0.1064 | | 2.5523 | 707 | 0.1069 | | 2.5560 | 708 | 0.1104 | | 2.5596 | 709 | 0.1157 | | 2.5632 | 710 | 0.1109 | | 2.5668 | 711 | 0.0991 | | 2.5704 | 712 | 0.105 | | 2.5740 | 713 | 0.1104 | | 2.5776 | 714 | 0.1134 | | 2.5812 | 715 | 0.1252 | | 2.5848 | 716 | 0.1205 | | 2.5884 | 717 | 0.112 | | 2.5921 | 718 | 0.1109 | | 2.5957 | 719 | 0.1151 | | 2.5993 | 720 | 0.097 | | 2.6029 | 721 | 0.1018 | | 2.6065 | 722 | 0.1205 | | 2.6101 | 723 | 0.107 | | 2.6137 | 724 | 0.102 | | 2.6173 | 725 | 0.1106 | | 2.6209 | 726 | 0.1068 | | 2.6245 | 727 | 0.1024 | | 2.6282 | 728 | 0.1153 | | 2.6318 | 729 | 0.0984 | | 2.6354 | 730 | 0.1019 | | 2.6390 | 731 | 0.1029 | | 2.6426 | 732 | 0.1147 | | 2.6462 | 733 | 0.1081 | | 2.6498 | 734 | 0.0996 | | 2.6534 | 735 | 0.1133 | | 2.6570 | 736 | 0.1102 | | 2.6606 | 737 | 0.1063 | | 2.6643 | 738 | 0.1119 | | 2.6679 | 739 | 0.1062 | | 2.6715 | 740 | 0.1021 | | 2.6751 | 741 | 0.1058 | | 2.6787 | 742 | 0.1026 | | 2.6823 | 743 | 0.1049 | | 2.6859 | 744 | 0.0894 | | 2.6895 | 745 | 0.1127 | | 2.6931 | 746 | 0.1107 | | 2.6968 | 747 | 0.1134 | | 2.7004 | 748 | 0.103 | | 2.7040 | 749 | 0.1081 | | 2.7076 | 750 | 0.1156 | | 2.7112 | 751 | 0.1092 | | 2.7148 | 752 | 0.1182 | | 2.7184 | 753 | 0.1092 | | 2.7220 | 754 | 0.1077 | | 2.7256 | 755 | 0.1165 | | 2.7292 | 756 | 0.1109 | </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.4.1 - Transformers: 4.49.0 - PyTorch: 2.5.1+cu124 - Accelerate: 1.4.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its 
construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
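The modular pipeline listed under "Full Model Architecture" (Transformer encoder, CLS-token pooling, then L2 normalization) can also be reproduced with plain `transformers`. The snippet below is a minimal sketch of that pipeline rather than an official usage path; it assumes the repository exposes standard `AutoModel`/`AutoTokenizer` weights.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Sketch: reproduce the SentenceTransformer modules above
# (Transformer -> CLS pooling -> Normalize) with plain transformers.
model_name = "seongil-dn/bge-m3-756"  # assumes standard AutoModel weights
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = ["first example sentence", "second example sentence"]
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=1024, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state   # (batch, seq_len, 1024)

cls = hidden[:, 0]                              # CLS pooling, as in module (1)
embeddings = F.normalize(cls, p=2, dim=1)       # Normalize, as in module (2)
similarities = embeddings @ embeddings.T        # cosine similarity on unit vectors
print(similarities)
```

Because the embeddings are L2-normalized, the plain dot product above is equivalent to the cosine similarity reported by `model.similarity` in the Sentence Transformers example.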
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
[ "CRAFT" ]
Non_BioNLP
LoneStriker/OpenBioLLM-Llama3-8B-GGUF
LoneStriker
null
[ "gguf", "llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
1,714
1,714
30
1
--- base_model: meta-llama/Meta-Llama-3-8B language: - en license: llama3 tags: - llama-3 - llama - Mixtral - instruct - finetune - chatml - DPO - RLHF - gpt4 - distillation widget: - example_title: OpenBioLLM-8B messages: - role: system content: You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. - role: user content: How long does it take for newborn jaundice to go away? output: text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment. The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines: 1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment. It''s important for parents to monitor their newborn''s jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn''s jaundice, it''s always best to consult with a healthcare professional for personalized advice and guidance.' 
model-index:
- name: OpenBioLLM-8B
  results: []
---

<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png)

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>

<p align="center" style="margin-top: 0px;">
  <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab Badge" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
    <span class="link-text" style=" margin-right: 5px;">Online Demo</span>
  </a> |
  <a href="https://github.com/openlifescience-ai">
    <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
    <span class="link-text" style=" margin-right: 5px;">GitHub</span>
  </a> |
  <a href="#">
    <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
    <span class="link-text" style="margin-right: 5px;">Paper</span>
  </a> |
  <a href="https://discord.gg/A5Fjf5zC69">
    <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
    <span class="link-text">Discord</span>
  </a>
</p>

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg)

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model

OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.

🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.

🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5 and Meditron-70B on biomedical benchmarks.

🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset.
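The policy-optimization component listed below is Direct Preference Optimization (DPO). As a rough, self-contained illustration of the DPO objective from the cited paper (not the actual OpenBioLLM training code, and with `beta` chosen arbitrarily, since the card does not state the value used), the preference loss can be sketched as:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the DPO objective (Rafailov et al., 2023).

    Each argument is a tensor of summed log-probabilities that the trained
    policy (or the frozen reference model) assigns to the chosen / rejected
    response. `beta` controls the strength of the KL-style regularization.
    """
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between preferred and dispreferred responses.
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()

# Toy usage with made-up log-probabilities:
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss)  # small positive value; shrinks as the preference margin grows
```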
Key components of the training pipeline include:

<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>

- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)

This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.

⚙️ **Release Details**:

- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
    - Paper: Coming soon

The model can be fine-tuned for more specialized tasks and datasets as needed.

OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.

We are excited to share OpenBioLLM-8B with researchers and developers around the world.

### Use with transformers

**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # "device" does not accept "auto"; device_map does
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # deterministic decoding, matching the temperature = 0 advice above
)
print(outputs[0]["generated_text"][len(prompt):])
```

## **Training procedure**

### **Training hyperparameters**

<details>
  <summary>Click to see details</summary>

- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>

### **Peft hyperparameters**

<details>
  <summary>Click to see details</summary>

- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj
</details>

### **Training results**

### **Framework versions**

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness (for evaluation)

# Benchmark Results

🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.

🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. 
By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png)

# Biomarkers Extraction

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png)

# Classification

OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png)

# De-Identification

OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png)

**Advisory Notice!**

While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.

Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations.

OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.

Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:

- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards

Always consult a qualified healthcare provider for personal medical needs.

# Citation

If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:

```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
  title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```

The accompanying paper is currently in progress and will be released soon.
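Since this particular repository hosts GGUF quantizations of OpenBioLLM-8B, the model can also be run without `transformers`. The snippet below is a minimal sketch using `llama-cpp-python`; the `.gguf` filename shown is a placeholder, since the actual name depends on which quantization you download.

```python
from llama_cpp import Llama

# Sketch only: replace the placeholder filename with the .gguf file
# you actually downloaded from this repository.
llm = Llama(
    model_path="OpenBioLLM-Llama3-8B-Q5_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert from the healthcare and biomedical domain."},
        {"role": "user", "content": "How long does it take for newborn jaundice to go away?"},
    ],
    temperature=0.0,   # deterministic output, per the advice above
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```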
<div align="center">
  <h2>💌 Contact</h2>
</div>

We look forward to hearing from you and collaborating on this exciting project!

**Contributors:**

- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.

# References

We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!

Result sources:

- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
[ "QUESTION_ANSWERING" ]
[ "MEDQA", "PUBMEDQA" ]
BioNLP
medspaner/mdeberta-v3-base-es-trials-misc-ents
medspaner
token-classification
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "arxiv:2111.09543", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,705
1,727
12
0
---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: 'Motivo de consulta: migraña leve. Exploración: Tensión arterial: 120/70 mmHg.'
model-index:
- name: mdeberta-v3-base-es-trials-misc-ents
  results: []
---

# mdeberta-v3-base-es-trials-misc-ents

This medical named entity recognition model detects the following clinical entities:

- Concept: e.g. *fecha de inclusión*, 'inclusion date'.
- Food\_or\_Drink: e.g. *soja*, 'soy'; *leche*, 'milk'.
- Observation\_or\_Finding: e.g. *normotenso*, 'normal tension'.
- Quantifier\_or\_Qualifier: e.g. *grave*, 'severe'.
- Result\_or\_Value: e.g. *< 3 LNS*, '< 3 UNL'.

The model achieves the following results on the test set (when trained with the training and development sets; results are averaged over 5 evaluation rounds):

- Precision: 0.702 (±0.011)
- Recall: 0.670 (±0.007)
- F1: 0.686 (±0.004)
- Accuracy: 0.955 (±0.001)

## Model description

This model adapts the [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) model, a multilingual version of the model presented in [He et al. (2021)](https://arxiv.org/abs/2111.09543), pre-trained on 2.5TB of data from the CommonCrawl corpus covering 100 languages. We fine-tuned ``mdeberta-v3-base`` to conduct medical named entity recognition on Spanish texts about clinical trials using version 2 of the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).

If you use this model, please cite as follows:

```
@article{campillosetal2024,
  title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
  author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
  journal = {BMC Bioinformatics},
  year = {2024},
  publisher = {BioMed Central}
}
```

## Intended uses & limitations

**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision.*

This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.

Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.

**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas.*

La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.

Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso.
Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.

## Training and evaluation data

The data used for fine-tuning are version 2 of the [Clinical Trials for Evidence-Based Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). It is a collection of 1200 texts about clinical trial studies and clinical trial announcements:

- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trial announcements published in the European Clinical Trials Register and the Repositorio Español de Estudios Clínicos

If you use the CT-EBM-ES resource, please cite as follows:

```
@article{campillosetal-midm2021,
  title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
  author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
  journal = {BMC Medical Informatics and Decision Making},
  volume = {21},
  number = {1},
  pages = {1--19},
  year = {2021},
  publisher = {BioMed Central}
}
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average 18.00 epochs (±2.74); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)

### Training results (test set; average and standard deviation of 5 rounds with different seeds)

| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.702 (±0.011) | 0.670 (±0.007) | 0.686 (±0.004) | 0.955 (±0.001) |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
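### How to use

A minimal usage sketch with the 🤗 `transformers` token-classification pipeline; the aggregation strategy is an illustrative choice, and the example sentence is the widget text from this card.

```python
from transformers import pipeline

# Load the NER model; "simple" merges subword tokens into whole entity spans
ner = pipeline(
    "token-classification",
    model="medspaner/mdeberta-v3-base-es-trials-misc-ents",
    aggregation_strategy="simple",
)

text = "Motivo de consulta: migraña leve. Exploración: Tensión arterial: 120/70 mmHg."

# Print each detected entity with its label and confidence score
for entity in ner(text):
    print(f"{entity['entity_group']:<25} {entity['word']:<20} {entity['score']:.3f}")
```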
[ "NAMED_ENTITY_RECOGNITION" ]
[ "SCIELO" ]
BioNLP
carsondial/slinger20241231-3
carsondial
sentence-similarity
["sentence-transformers","safetensors","bert","sentence-similarity","feature-extraction","generated_(...TRUNCATED)
1,735
1,735
6
0
"---\nbase_model: BAAI/bge-base-en-v1.5\nlanguage:\n- en\nlibrary_name: sentence-transformers\nlicen(...TRUNCATED)
[ "TEXT_CLASSIFICATION" ]
[ "CRAFT" ]
Non_BioNLP
StivenLancheros/Roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_en_es
StivenLancheros
token-classification
["transformers","pytorch","tensorboard","roberta","token-classification","generated_from_trainer","l(...TRUNCATED)
1,647
1,647
115
0
"---\nlicense: apache-2.0\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\ntags:\n- generated_fro(...TRUNCATED)
[ "NAMED_ENTITY_RECOGNITION" ]
[ "CRAFT" ]
BioNLP
bobox/DeBERTa-small-ST-v1-test-step2
bobox
sentence-similarity
["sentence-transformers","pytorch","deberta-v2","sentence-similarity","feature-extraction","generate(...TRUNCATED)
1,724
1,724
7
0
"---\nbase_model: bobox/DeBERTa-small-ST-v1-test\ndatasets:\n- jinaai/negation-dataset-v2\n- tals/vi(...TRUNCATED)
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
[ "MEDAL", "SCIQ", "SCITAIL" ]
Non_BioNLP