Dataset Viewer
Auto-converted to Parquet
| Column | Dtype | Range / Values |
| --- | --- | --- |
| id | string | length 6 – 113 |
| author | string | length 2 – 36 |
| task_category | string | 42 classes |
| tags | sequence | length 1 – 4.05k |
| created_time | date | 2022-03-02 23:29:04 – 2025-04-10 08:38:38 |
| last_modified | date | 2020-05-14 13:13:12 – 2025-04-19 04:15:39 |
| downloads | int64 | 0 – 118M |
| likes | int64 | 0 – 4.86k |
| README | string | length 30 – 1.01M |
| matched_bigbio_names | sequence | length 1 – 8 |
| is_bionlp | string | 3 classes |
| model_cards | string | length 0 – 1M |
| metadata | string | length 2 – 698k |
| source | string | 2 classes |
| matched_task | sequence | length 1 – 10 |
| __index_level_0__ | int64 | 0 – 46.9k |
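Because the dataset is auto-converted to Parquet, the rows previewed below can be loaded directly with the `datasets` library. A minimal sketch, where the repository id is a hypothetical placeholder to be replaced with this dataset's actual id:

```python
# Minimal sketch: load the Parquet-backed dataset and inspect the columns above.
# "user/bionlp-model-cards" is a hypothetical placeholder for the real repo id.
from datasets import load_dataset

ds = load_dataset("user/bionlp-model-cards", split="train")

print(ds.column_names)   # id, author, task_category, tags, created_time, ...
print(ds[0]["id"])       # first row's model id

# Example: keep only the rows flagged as BioNLP
bionlp_rows = ds.filter(lambda row: row["is_bionlp"] == "BioNLP")
print(len(bionlp_rows))
```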
Baiming123/Calcu_Disease_Similarity
Baiming123
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "sentence-similarity", "dataset:Baiming123/MeSHDS", "base_model:sentence-transformers/multi-qa-MiniLM-L6-cos-v1", "base_model:finetune:sentence-transformers/multi-qa-MiniLM-L6-cos-v1", "doi:10.57967/hf/3108", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
"2024-09-20T15:58:13"
2024-12-14T10:10:29+00:00
0
3
---
base_model:
- sentence-transformers/multi-qa-MiniLM-L6-cos-v1
datasets:
- Baiming123/MeSHDS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
---

# Model Description

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. The 'Calcu_Disease_Similarity' model is designed to encode two disease terms and compute their **semantic similarity**. The model has been fine-tuned on the disease-related 'MeSHDS' dataset and achieves a high F1 score in distinguishing experimentally validated miRNA-target interactions (MTIs) from predicted MTIs by considering disease similarity.

If you use this model in your research, please cite the following paper:

```
@article {Chen2024.05.17.594604,
    author = {Chen, Baiming},
    title = {miRTarDS: High-Accuracy Refining Protein-level MicroRNA Target Interactions from Prediction Databases Using Sentence-BERT},
    elocation-id = {2024.05.17.594604},
    year = {2024},
    doi = {10.1101/2024.05.17.594604},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {MicroRNAs (miRNAs) regulate gene expression by binding to mRNAs, inhibiting translation, or promoting mRNA degradation. miRNAs are of great importance in the development of various diseases. Currently, numerous sequence-based miRNA target prediction tools are available, however, only 1\% of their predictions have been experimentally validated. In this study, we propose a novel approach that leverages disease similarity between miRNAs and genes as a key feature to further refine and screen human sequence-based predicted miRNA target interactions (MTIs). To quantify the semantic similarity of diseases, we fine-tuned the Sentence-BERT model. Our method achieved an F1 score of 0.88 in accurately distinguishing human protein-level experimentally validated MTIs (functional MTIs, validated through western blot or reporter assay) and predicted MTIs. Moreover, this method exhibits exceptional generalizability across different databases. We applied the proposed method to analyze 1,220,904 human MTIs sourced from miRTarbase, miRDB, and miRWalk, encompassing 6,085 genes and 1,261 pre-miRNAs. Notably, we accurately identified 3,883 out of 3,962 MTIs with strong experimental evidence from miRTarbase. This study has the potential to provide valuable insights into the understanding of miRNA-gene regulatory networks and to promote advancements in disease diagnosis, treatment, and drug development. Competing Interest Statement: The authors have declared no competing interest.},
    URL = {https://www.biorxiv.org/content/early/2024/12/08/2024.05.17.594604},
    eprint = {https://www.biorxiv.org/content/early/2024/12/08/2024.05.17.594604.full.pdf},
    journal = {bioRxiv}
}
```

## Key Features

- Fine-tuned to compute semantic similarity between disease names.
- Achieves an F1 score of 0.88 in distinguishing protein-level experimentally validated MTIs (western blot, reporter assay) from predicted MTIs.
- Built for applications in understanding miRNA-gene regulatory networks, disease diagnosis, treatment, and drug discovery.
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

# Usage (Sentence-Transformers)

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer, util

# Load the pre-trained SBERT model from the Hugging Face Hub
# (or pass a local path to a downloaded copy instead)
model = SentenceTransformer("Baiming123/Calcu_Disease_Similarity")

# Example usage
disease1 = "lung cancer"
disease2 = "pulmonary fibrosis"

def sts(sentence_a, sentence_b) -> float:
    query_emb = model.encode(sentence_a)
    doc_emb = model.encode(sentence_b)
    # The model L2-normalizes its embeddings (Normalize layer above),
    # so the dot product is equivalent to cosine similarity.
    [score] = util.dot_score(query_emb, doc_emb)[0].tolist()
    return score

similarity = sts(disease1, disease2)
print(similarity)
```

# Additional Information

## License

This model is licensed under the CC BY-NC 4.0 International license. If you use this model, please adhere to the license requirements.

## Questions or Issues

If you encounter any issues or have any questions while using the model, feel free to reach out to the author for assistance. Thank you for your support and for using this model!
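Beyond single pairs, the model can score many disease terms at once. A minimal sketch using `util.cos_sim` from sentence-transformers to build a pairwise similarity matrix (the disease list is illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Baiming123/Calcu_Disease_Similarity")

# Illustrative disease terms; substitute your own vocabulary.
diseases = ["lung cancer", "pulmonary fibrosis", "asthma", "breast cancer"]

# Encode all terms in one batch and compute the pairwise similarity matrix.
embeddings = model.encode(diseases)
similarity = util.cos_sim(embeddings, embeddings)

for i in range(len(diseases)):
    for j in range(i + 1, len(diseases)):
        print(f"{diseases[i]} <-> {diseases[j]}: {similarity[i][j].item():.3f}")
```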
[ "MIRNA" ]
BioNLP
{"base_model": ["sentence-transformers/multi-qa-MiniLM-L6-cos-v1"], "datasets": ["Baiming123/MeSHDS"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity"]}
dataset
null
0
johnsnowlabs/JSL-MedMNX-7B-SFT
johnsnowlabs
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "reward model", "RLHF", "medical", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
"2024-04-16T05:27:20"
2024-04-18T19:25:47+00:00
2,926
3
---
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
tags:
- reward model
- RLHF
- medical
---

# JSL-MedMNX-7B-SFT

[<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com)

JSL-MedMNX-7B-SFT is a 7-billion-parameter model developed by [John Snow Labs](https://www.johnsnowlabs.com/). This model is SFT-finetuned on an 11k-example Alpaca-format medical dataset over the base model [JSL-MedMNX-7B](https://huggingface.co/johnsnowlabs/JSL-MedMNX-7B). Check out the performance on the [Open Medical LLM Leaderboard](https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard).

This model is available under a [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en) license and must also conform to this [Acceptable Use Policy](https://huggingface.co/johnsnowlabs). If you need to license this model for commercial use, please contact us at [email protected].

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "johnsnowlabs/JSL-MedMNX-7B-SFT"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## 🏆 Evaluation

| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5209|± |0.0068|
| | |none | 0|acc |0.5675|± |0.0058|
| - medmcqa |Yaml |none | 0|acc |0.5152|± |0.0077|
| | |none | 0|acc_norm|0.5152|± |0.0077|
| - medqa_4options |Yaml |none | 0|acc |0.5397|± |0.0140|
| | |none | 0|acc_norm|0.5397|± |0.0140|
| - anatomy (mmlu) | 0|none | 0|acc |0.6593|± |0.0409|
| - clinical_knowledge (mmlu) | 0|none | 0|acc |0.7245|± |0.0275|
| - college_biology (mmlu) | 0|none | 0|acc |0.7431|± |0.0365|
| - college_medicine (mmlu) | 0|none | 0|acc |0.6532|± |0.0363|
| - medical_genetics (mmlu) | 0|none | 0|acc |0.7300|± |0.0446|
| - professional_medicine (mmlu)| 0|none | 0|acc |0.7206|± |0.0273|
| - pubmedqa | 1|none | 0|acc |0.7720|± |0.0188|

|Groups|Version|Filter|n-shot| Metric |Value | |Stderr|
|------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5209|± |0.0068|
| | |none | 0|acc |0.5675|± |0.0058|
[ "MEDQA", "PUBMEDQA" ]
BioNLP
{"language": ["en"], "library_name": "transformers", "license": "cc-by-nc-nd-4.0", "tags": ["reward model", "RLHF", "medical"]}
dataset
null
1
RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf
RichardErkhov
null
[ "gguf", "arxiv:2405.01886", "endpoints_compatible", "region:us", "conversational" ]
"2024-10-30T11:14:53"
2024-10-30T15:06:18+00:00
75
0
---
{}
---

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

Llama3-Aloe-8B-Alpha - GGUF
- Model creator: https://huggingface.co/HPAI-BSC/
- Original model: https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama3-Aloe-8B-Alpha.Q2_K.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama3-Aloe-8B-Alpha.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama3-Aloe-8B-Alpha.Q3_K.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama3-Aloe-8B-Alpha.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama3-Aloe-8B-Alpha.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama3-Aloe-8B-Alpha.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama3-Aloe-8B-Alpha.Q4_0.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama3-Aloe-8B-Alpha.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama3-Aloe-8B-Alpha.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama3-Aloe-8B-Alpha.Q4_K.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama3-Aloe-8B-Alpha.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama3-Aloe-8B-Alpha.Q4_1.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama3-Aloe-8B-Alpha.Q5_0.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama3-Aloe-8B-Alpha.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama3-Aloe-8B-Alpha.Q5_K.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama3-Aloe-8B-Alpha.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama3-Aloe-8B-Alpha.Q5_1.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama3-Aloe-8B-Alpha.Q6_K.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama3-Aloe-8B-Alpha.Q8_0.gguf](https://huggingface.co/RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf/blob/main/Llama3-Aloe-8B-Alpha.Q8_0.gguf) | Q8_0 | 7.95GB |

Original model description:

---
license: cc-by-nc-4.0
datasets:
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- jondurbin/airoboros-3.2
- hkust-nlp/deita-10k-v0
- LDJnr/Capybara
- HPAI-BSC/CareQA
- GBaker/MedQA-USMLE-4-options
- lukaemon/mmlu
- bigbio/pubmed_qa
- openlifescienceai/medmcqa
- bigbio/med_qa
- HPAI-BSC/better-safe-than-sorry
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medmcqa-cot
- HPAI-BSC/medqa-cot
language:
- en
library_name: transformers
tags:
- biology
- medical
pipeline_tag: question-answering
---

# Aloe: A New Family of Healthcare LLMs

Aloe is a new family of healthcare LLMs that is highly competitive with all previous open models of its range and reaches state-of-the-art results at its size by using model merging and advanced prompting strategies. Aloe scores high in metrics measuring ethics and factuality, thanks to a combined red-teaming and alignment effort. Complete training details, model merging configurations, and all training data (including synthetically generated data) will be shared. Additionally, the prompting repository used in this work to produce state-of-the-art results during inference will also be shared. Aloe comes with a healthcare-specific risk assessment to contribute to the safe use and deployment of such systems.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/xlssx5_3_kLQlJlmE-aya.png" width="95%">

## Model Details

### Model Description

- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (mainly)
- **License:** This model is based on Meta Llama 3 8B and is governed by the [Meta Llama 3 License](https://llama.meta.com/llama3/license/). All our modifications are available with a [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

### Model Sources

- **Repository:** https://github.com/HPAI-BSC/prompt_engine (more coming soon)
- **Paper:** https://arxiv.org/abs/2405.01886 (more coming soon)

## Model Performance

Aloe has been tested on the most popular healthcare QA datasets, with and without the Medprompt inference technique. Results show competitive performance, even against bigger models.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/rQ4z-qXzKN44oAcFDbHi2.png" width="95%">

Results using advanced prompting methods (aka Medprompt) are achieved through a [repo](https://github.com/HPAI-BSC/prompt_engine) made public with this work.

## Uses

### Direct Use

We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare.

### Out-of-Scope Use

These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is prohibited.
## Bias, Risks, and Limitations

We consider three risk cases:

- Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceiving activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.
- Medical decision-making without professional supervision. While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defences, together with the introduction of disclaimers and warnings on the models' outputs.
- Access to information on dangerous substances or procedures. While literature on sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralize such access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far the effects remain insufficient, as jailbreaking methods still overcome it.

The table below shows the performance of Aloe at several AI safety tasks:

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">

### Recommendations

We avoid the use of all personal data in our training. Model safety cannot be guaranteed. Aloe can produce toxic content under the appropriate prompts. For these reasons, minors should not be left alone to interact with Aloe without supervision.

## How to Get Started with the Model

Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.

#### Transformers pipeline

```python
import transformers
import torch

model_id = "HPAI-BSC/Llama3-Aloe-8B-Alpha"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello."},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "HPAI-BSC/Llama3-Aloe-8B-Alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Training Details

Supervised fine-tuning on top of Llama 3 8B using medical and general domain datasets, model merging using the DARE-TIES process, and a two-stage DPO process for human preference alignment. More details coming soon.

### Training Data

- Medical domain datasets, including synthetic data generated using Mixtral-8x7B and Genstruct
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medqa-cot
- HPAI-BSC/medmcqa-cot
- LDJnr/Capybara
- hkust-nlp/deita-10k-v0
- jondurbin/airoboros-3.2
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- Custom preference data with adversarial prompts generated from Anthropic Harmless, Chen et al., and original prompts

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)

#### Metrics

- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.

### Results

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/STlPSggXr9P9JeWAvmAsi.png" width="90%">

#### Summary

To compare Aloe with the most competitive open models (both general-purpose and healthcare-specific), we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA, and MMLU for six medical tasks only), together with the new and highly reliable CareQA. We produce the standard MultiMedQA score for reference, by computing the weighted average accuracy on all scores except CareQA. Additionally, we calculate the arithmetic mean across all datasets. The Medical MMLU is calculated by averaging the six medical subtasks: Anatomy, Clinical Knowledge, College Biology, College Medicine, Medical Genetics, and Professional Medicine.

Benchmark results indicate that the training conducted on Aloe has boosted its performance above Llama3-8B-Instruct. Llama3-Aloe-8B-Alpha outperforms larger models like Meditron 70B, and is close to larger base models like Yi-34B. For the former, this gain is consistent even when using SC-CoT, using their best-reported variant. All these results make Llama3-Aloe-8B-Alpha the best healthcare LLM of its size.

With the help of prompting techniques, the performance of Llama3-Aloe-8B-Alpha is significantly improved. Medprompting in particular provides a 7% increase in reported accuracy, after which Llama3-Aloe-8B-Alpha only lags behind the ten-times-larger Llama-3-70B-Instruct. This improvement is mostly consistent across medical fields. Llama3-Aloe-8B-Alpha with Medprompting beats the performance of Meditron 70B with their self-reported 20-shot SC-CoT in MMLU-Medical and is slightly worse on the other benchmarks.

## Environmental Impact

- **Hardware Type:** 4xH100
- **Hours used:** 7,000
- **Hardware Provider:** Barcelona Supercomputing Center
- **Compute Region:** Spain
- **Carbon Emitted:** 439.25 kg

## Model Card Authors

[Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar)

## Model Card Contact

mailto:[email protected]

## Citations

If you use this repository in a published work, please cite the following papers as source:

```
@misc{gururajan2024aloe,
      title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
      author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés Dario Garcia-Gasulla},
      year={2024},
      eprint={2405.01886},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
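Since this repository ships GGUF files only, inference runs through a llama.cpp-compatible runtime rather than transformers. A minimal sketch with `llama-cpp-python`, assuming one of the quants from the table above (for example Q4_K_M) has been downloaded locally; the path and prompts are illustrative:

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). The local path is illustrative;
# download any quant from the table above first.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama3-Aloe-8B-Alpha.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": "What is hypertension?"},
    ],
    max_tokens=256,
    temperature=0.6,
)
print(output["choices"][0]["message"]["content"])
```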
[ "MEDQA", "PUBMEDQA" ]
BioNLP
{}
dataset
null
2
Rodrigo1771/bsc-bio-ehr-es-symptemist-word2vec-85-ner
Rodrigo1771
token-classification
[ "transformers", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "dataset:Rodrigo1771/symptemist-85-ner", "base_model:PlanTL-GOB-ES/bsc-bio-ehr-es", "base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
"2024-09-04T19:00:28"
2024-09-04T19:11:15+00:00
13
0
---
base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
datasets:
- Rodrigo1771/symptemist-85-ner
library_name: transformers
license: apache-2.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- token-classification
- generated_from_trainer
model-index:
- name: output
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: Rodrigo1771/symptemist-85-ner
      type: Rodrigo1771/symptemist-85-ner
      config: SympTEMIST NER
      split: validation
      args: SympTEMIST NER
    metrics:
    - type: precision
      value: 0.6646525679758308
      name: Precision
    - type: recall
      value: 0.722495894909688
      name: Recall
    - type: f1
      value: 0.6923682140047207
      name: F1
    - type: accuracy
      value: 0.9499021463633739
      name: Accuracy
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# output

This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the Rodrigo1771/symptemist-85-ner dataset. It achieves the following results on the evaluation set:
- Loss: 0.2884
- Precision: 0.6647
- Recall: 0.7225
- F1: 0.6924
- Accuracy: 0.9499

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 174 | 0.1427 | 0.5737 | 0.6667 | 0.6167 | 0.9481 |
| No log | 2.0 | 348 | 0.1535 | 0.6222 | 0.7050 | 0.6610 | 0.9492 |
| 0.1242 | 3.0 | 522 | 0.1802 | 0.6378 | 0.7181 | 0.6756 | 0.9486 |
| 0.1242 | 4.0 | 696 | 0.2066 | 0.6301 | 0.7263 | 0.6748 | 0.9466 |
| 0.1242 | 5.0 | 870 | 0.2270 | 0.6438 | 0.7181 | 0.6789 | 0.9476 |
| 0.0245 | 6.0 | 1044 | 0.2420 | 0.6445 | 0.7225 | 0.6813 | 0.9476 |
| 0.0245 | 7.0 | 1218 | 0.2623 | 0.6585 | 0.7252 | 0.6903 | 0.9491 |
| 0.0245 | 8.0 | 1392 | 0.2849 | 0.6513 | 0.7176 | 0.6828 | 0.9484 |
| 0.0086 | 9.0 | 1566 | 0.2880 | 0.6748 | 0.7088 | 0.6914 | 0.9504 |
| 0.0086 | 10.0 | 1740 | 0.2884 | 0.6647 | 0.7225 | 0.6924 | 0.9499 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
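The auto-generated card omits a usage example; a minimal inference sketch with the transformers token-classification pipeline (the Spanish clinical sentence is illustrative):

```python
# Minimal sketch: run the fine-tuned NER model on a clinical sentence.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Rodrigo1771/bsc-bio-ehr-es-symptemist-word2vec-85-ner",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)

# Illustrative Spanish sentence (SympTEMIST targets symptom mentions):
# "The patient presents intense headache and fever for three days."
text = "El paciente presenta cefalea intensa y fiebre desde hace tres días."
for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```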
[ "SYMPTEMIST" ]
BioNLP
{"base_model": "PlanTL-GOB-ES/bsc-bio-ehr-es", "datasets": ["Rodrigo1771/symptemist-85-ner"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["token-classification", "generated_from_trainer"], "model-index": [{"name": "output", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "Rodrigo1771/symptemist-85-ner", "type": "Rodrigo1771/symptemist-85-ner", "config": "SympTEMIST NER", "split": "validation", "args": "SympTEMIST NER"}, "metrics": [{"type": "precision", "value": 0.6646525679758308, "name": "Precision"}, {"type": "recall", "value": 0.722495894909688, "name": "Recall"}, {"type": "f1", "value": 0.6923682140047207, "name": "F1"}, {"type": "accuracy", "value": 0.9499021463633739, "name": "Accuracy"}]}]}]}
dataset
null
3
kunkunhu/craft_mol
kunkunhu
null
[ "region:us" ]
"2025-01-25T15:38:37"
2025-01-26T09:08:28+00:00
0
0
--- {} --- # CRAFT CRAFT: Consistent Representational Fusion of Three Molecular Modalities
[ "CRAFT" ]
Non_BioNLP
{}
dataset
null
4
jiey2/DISC-MedLLM
jiey2
text-generation
[ "transformers", "pytorch", "baichuan", "text-generation", "medical", "custom_code", "zh", "dataset:Flmc/DISC-Med-SFT", "arxiv:2308.14346", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
"2023-11-04T10:43:52"
2023-11-04T10:48:48+00:00
16
1
---
datasets:
- Flmc/DISC-Med-SFT
language:
- zh
license: apache-2.0
tags:
- medical
---

This repository contains DISC-MedLLM, the version that uses Baichuan-13B-Base as the base model.

**Please note that due to the ongoing development of the project, the model weights in this repository may differ from those in our currently deployed demo.**

Check [DISC-MedLLM](https://github.com/FudanDISC/DISC-MedLLM) for more information.

# DISC-MedLLM

[**Demo**](http://med.fudan-disc.com) | [**Tech Report**](https://arxiv.org/abs/2308.14346)

This is the repo of DISC-MedLLM, a medical domain-specific LLM designed for conversational healthcare scenarios by the [Fudan-DISC](http://fudan-disc.com) lab.

The following resources have been released:

* DISC-Med-SFT dataset (without the behavioral preference dataset)
* Model [weights](https://huggingface.co/Flmc/DISC-MedLLM) of DISC-MedLLM

You can check this [link](http://medllm.fudan-disc.com) to try our online demo.

## Overview

DISC-MedLLM is a large-scale domain-specific model designed for conversational healthcare scenarios. It can address a variety of your needs, including medical consultations and treatment inquiries, offering you high-quality health support services.

DISC-MedLLM effectively bridges the gap between general language models and real-world medical consultations, as evidenced by experimental results.

Owing to our goal-oriented strategy and the framework that integrates both LLM and Human in the loop based on real-world doctor-patient dialogues and knowledge graphs, DISC-MedLLM boasts several features:

* **Knowledge-intensive and reliable**
* **Ability to conduct multi-turn inquiry**
* **Alignment with human preferences**

## Dataset

<!-- In order to align the distribution of actual doctor responses with the intended AI doctor response distribution, our dataset is constructed from five main resources: Real-world Conversations (420k), Knowledge Graph-derived Question-Answer pairs (50k), Artificially Annotated Data aligned with human preferences (2k), MedMCQA (8k), and additional general data (34k). -->

To train DISC-MedLLM, we construct a high-quality dataset called DISC-Med-SFT consisting of over 470k distinct examples derived from existing medical datasets. We adopt a goal-oriented strategy by selectively reconstructing the dataset using a few deliberately chosen sources. These data sources serve the purpose of assisting LLMs in acquiring medical domain knowledge, aligning behavioral patterns with human preferences, and capturing real-world online medical dialogue distributions.
| Dataset | Original Source | Size |
| --- | --- | --- |
| Re-constructed AI Doctor-Patient Dialogue | MedDialog | 400k |
| Re-constructed AI Doctor-Patient Dialogue | cMedQA2 | 20k |
| Knowledge Graph QA pairs | CMeKG | 50k |
| Behavior Preference Dataset | Manual selection | 2k |
| Others | MedMCQA | 8k |
| Others | MOSS-SFT | 33k |
| Others | Alpaca-GPT4-zh | 1k |

## Deploy

The current version of DISC-MedLLM is derived from [Baichuan-13B-Base](https://github.com/baichuan-inc/Baichuan-13B). You can directly download our model weights from the HuggingFace [repository](https://huggingface.co/Flmc/DISC-MedLLM), or automatically obtain them through the demo code.

### Using through Hugging Face transformers

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation.utils import GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
>>> model.generation_config = GenerationConfig.from_pretrained("Flmc/DISC-MedLLM")
>>> messages = []
>>> # English: "My cervical spine feels very uncomfortable, and I wake up with a headache every day"
>>> messages.append({"role": "user", "content": "我感觉自己颈椎非常不舒服,每天睡醒都会头痛"})
>>> response = model.chat(tokenizer, messages)
>>> print(response)
```

Additionally, since the current version uses Baichuan as the base model, you can refer to its [repo](https://github.com/baichuan-inc/Baichuan-13B) for deploying with int8 or int4 quantized inference. However, using quantized deployment will result in performance degradation.

## Training

You can fine-tune our model using data in the same schema as ours. Our training code is derived from [Firefly](https://github.com/yangjianxin1/Firefly), with a different data schema and dialogue format. We only provide the code for full-parameter fine-tuning:

```shell
deepspeed --num_gpus={num_gpus} ./train/train.py --train_args_file ./train/train_args/sft.json
```

> Please check the setup of `sft.json` before you attempt to start training.
<br>

If you want to fine-tune our model with other training code, please use the following dialogue format (a minimal encoding sketch appears at the end of this card).

```shell
<\b><$user_token>content<$assistant_token>content<\s><$user_token>content ...
```

The `user_token` and `assistant_token` we used are `195` and `196`, respectively, the same as Baichuan-13b-Chat.

## Declaration

Due to the inherent limitations of language models, we cannot assure the accuracy or reliability of information generated by this model. This model is designed exclusively for research and testing by individuals and academic groups. We urge users to critically assess any information or medical advice obtained through the model's output. Blindly trusting or following such information is strongly discouraged. We disclaim responsibility for any issues, risks, or adverse consequences resulting from the model's use.

## Licenses

The use of the source code in this repository complies with the Apache 2.0 License.

## Citation

```bibtex
@misc{bao2023discmedllm,
      title={DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation},
      author={Zhijie Bao and Wei Chen and Shengze Xiao and Kuang Ren and Jiaao Wu and Cheng Zhong and Jiajie Peng and Xuanjing Huang and Zhongyu Wei},
      year={2023},
      eprint={2308.14346},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
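As referenced above, a minimal encoding sketch for this dialogue format. The helper below is hypothetical (not part of the released code), and mapping `<\b>`/`<\s>` to the tokenizer's BOS/EOS ids is an assumption:

```python
# Hypothetical helper illustrating the dialogue format above.
# Token ids 195 (user) and 196 (assistant) follow Baichuan-13b-Chat;
# treating <\b>/<\s> as bos/eos ids is an assumption.
def build_dialogue_ids(tokenizer, turns):
    """turns: list of (role, text) pairs, e.g. [("user", "..."), ("assistant", "...")]."""
    USER_TOKEN, ASSISTANT_TOKEN = 195, 196
    ids = [tokenizer.bos_token_id]  # <\b>
    for role, text in turns:
        ids.append(USER_TOKEN if role == "user" else ASSISTANT_TOKEN)
        ids.extend(tokenizer.encode(text, add_special_tokens=False))
        if role == "assistant":
            ids.append(tokenizer.eos_token_id)  # <\s> closes each assistant turn
    return ids
```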
[ "MEDDIALOG" ]
BioNLP
{"datasets": ["Flmc/DISC-Med-SFT"], "language": ["zh"], "license": "apache-2.0", "tags": ["medical"]}
dataset
null
5
ManoloPueblo/LLM_MERGE_CC4
ManoloPueblo
null
[ "safetensors", "mistral", "merge", "mergekit", "lazymergekit", "llm-merge-cc4", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
"2024-11-10T13:55:30"
2024-11-10T14:01:19+00:00
6
1
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- llm-merge-cc4
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---

# LLM_MERGE_CC4

LLM_MERGE_CC4 is a merge of the following models, created by ManoloPueblo using [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)

## 🧩 Merge configuration

```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
  - sources:
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```

## Description

LLM_MERGE_CC4 is a language model created by merging two Mistral models:

1. OpenPipe/mistral-ft-optimized-1218 - the base model of the merge (layer_range: [0, 32])
2. mlabonne/NeuralHermes-2.5-Mistral-7B - an optimized Mistral variant by mlabonne (layer_range: [24, 32])

This merge uses the "passthrough" method with bfloat16 precision to combine the strengths of both models.

## Architecture

The model keeps the base architecture of OpenPipe/mistral-ft-optimized-1218 while incorporating layers from both models: a passthrough merge concatenates the selected layer slices, so the result is a deeper network than either source model.

## Merge parameters

- Merge method: passthrough
- Data type: bfloat16
- Layer ranges:
  * OpenPipe/mistral-ft-optimized-1218: layer_range: [0, 32]
  * NeuralHermes-2.5-Mistral-7B: layer_range: [24, 32]

## Usage

This model can be used with the Hugging Face transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ManoloPueblo/LLM_MERGE_CC4")
model = AutoModelForCausalLM.from_pretrained("ManoloPueblo/LLM_MERGE_CC4")
```

## Merged models

1. [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) - base model
2. [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) - optimized version

## Limitations

As with any language model, users should be aware of the potential biases and limitations inherent in the source models. Performance may vary depending on the use case.
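As a follow-up to the Usage section, a minimal generation sketch. It assumes `model` and `tokenizer` were loaded as shown above; the prompt and sampling settings are illustrative:

```python
# Minimal generation sketch; assumes model/tokenizer from the Usage section.
# Prompt and decoding settings are illustrative, not recommendations.
inputs = tokenizer("What are the strengths of merged language models?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```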
[ "CAS" ]
Non_BioNLP
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "llm-merge-cc4", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B"]}
dataset
null
6
razent/SciFive-large-Pubmed_PMC-MedNLI
razent
text2text-generation
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "mednli", "en", "dataset:pubmed", "dataset:pmc/open_access", "arxiv:2106.03598", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
"2022-03-20T17:24:33"
2022-03-22T04:05:21+00:00
1,302
2
---
datasets:
- pubmed
- pmc/open_access
language:
- en
tags:
- text2text-generation
- mednli
widget:
- text: 'mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable'
---

# SciFive Pubmed+PMC Large on MedNLI

## Introduction

The finetuned SciFive Pubmed+PMC Large model achieves state-of-the-art results on [MedNLI (Medical Natural Language Inference)](https://paperswithcode.com/sota/natural-language-inference-on-mednli).

Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)

Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_

## How to use

For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")
model.cuda()

sent_1 = "In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA."
sent_2 = "The patient is hemodynamically stable"
text = f"mednli: sentence1: {sent_1} sentence2: {sent_2}"

encoding = tokenizer.encode_plus(text, padding='max_length', max_length=256, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=8,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
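The same inference can also be expressed with the transformers `pipeline` API. This is a sketch; generation defaults may differ slightly from the explicit `generate` call above:

```python
# Sketch using the high-level pipeline API; decoding defaults may differ
# slightly from the explicit generate() call above.
from transformers import pipeline

mednli = pipeline("text2text-generation", model="razent/SciFive-large-Pubmed_PMC-MedNLI")
text = ("mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, "
        "RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable")
print(mednli(text, max_length=8)[0]["generated_text"])
```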
[ "MEDNLI" ]
BioNLP
{"datasets": ["pubmed", "pmc/open_access"], "language": ["en"], "tags": ["text2text-generation", "mednli"], "widget": [{"text": "mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable"}]}
dataset
null
7
adipanda/makima-simpletuner-lora-2
adipanda
text-to-image
[ "diffusers", "flux", "flux-diffusers", "text-to-image", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
"2024-10-12T01:00:13"
2024-10-13T19:26:05+00:00
16
0
---
base_model: black-forest-labs/FLUX.1-dev
license: other
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: unconditional (blank prompt)
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_0_0.png
- text: A scene from Chainsaw Man. Makima holding a sign that says 'I LOVE PROMPTS!', she is standing full body on a beach at sunset. She is wearing a a white shirt, black tie, and black coat. The setting sun casts a dynamic shadow on her face.
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_1_0.png
- text: A scene from Chainsaw Man. Makima jumping out of a propeller airplane, sky diving. She looks excited and her hair is blowing in the wind. The sky is clear and blue, there are birds pictured in the distance.
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_2_0.png
- text: 'A scene from Chainsaw Man. Makima spinning a basketball on her finger on a basketball court. She is wearing a lakers jersey with the #12 on it. The basketball hoop and crowd are in the background cheering for her. She is smiling.'
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_3_0.png
- text: A scene from Chainsaw Man. Makima is wearing a suit in an office shaking the hand of a business man. The man has purple hair and is wearing professional attire. There is a Google logo in the background. It is during daytime, and the overall sentiment is one of accomplishment.
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_4_0.png
- text: A scene from Chainsaw Man. Makima is fighting a large brown grizzly bear, deep in a forest. The bear is tall and standing on two legs, roaring. The bear is also wearing a crown because it is the king of all bears. Around them are tall trees and other animals watching.
  parameters:
    negative_prompt: blurry, cropped, ugly
  output:
    url: ./assets/image_5_0.png
---

# makima-simpletuner-lora-2

This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

No validation prompt was used during training.

## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024x1024`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.
## Training settings

- Training epochs: 333
- Training steps: 4000
- Learning rate: 0.0003
- Effective batch size: 48
  - Micro-batch size: 48
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LyCORIS Config:

```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 12,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 12
            },
            "FeedForward": {
                "factor": 6
            }
        }
    }
}
```

## Datasets

### makima-512
- Repeats: 2
- Total number of images: 172
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors'  # you will have to download this manually
lora_scale = 1.0

# Load the base pipeline first; the LyCORIS wrapper attaches to its transformer.
# (bfloat16 is an assumption here; choose a dtype your hardware supports.)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

prompt = "An astronaut is riding a horse through the jungles of Thailand."

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
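To audition the adapter at a different strength, one hedged option is to rebuild the wrapper with another scale before merging (the 0.8 value below is illustrative):

```python
# Rebuild the wrapper at reduced strength before merging (value illustrative).
# Start from a freshly loaded pipeline: merge_to() bakes the weights in place,
# so re-merging on an already-merged transformer would compound the adapter.
lora_scale = 0.8
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()
```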
[ "BEAR" ]
Non_BioNLP
{"base_model": "black-forest-labs/FLUX.1-dev", "license": "other", "tags": ["flux", "flux-diffusers", "text-to-image", "diffusers", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris"], "inference": true, "widget": [{"text": "unconditional (blank prompt)", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_0_0.png"}}, {"text": "A scene from Chainsaw Man. Makima holding a sign that says 'I LOVE PROMPTS!', she is standing full body on a beach at sunset. She is wearing a a white shirt, black tie, and black coat. The setting sun casts a dynamic shadow on her face.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_1_0.png"}}, {"text": "A scene from Chainsaw Man. Makima jumping out of a propeller airplane, sky diving. She looks excited and her hair is blowing in the wind. The sky is clear and blue, there are birds pictured in the distance.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_2_0.png"}}, {"text": "A scene from Chainsaw Man. Makima spinning a basketball on her finger on a basketball court. She is wearing a lakers jersey with the #12 on it. The basketball hoop and crowd are in the background cheering for her. She is smiling.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_3_0.png"}}, {"text": "A scene from Chainsaw Man. Makima is wearing a suit in an office shaking the hand of a business man. The man has purple hair and is wearing professional attire. There is a Google logo in the background. It is during daytime, and the overall sentiment is one of accomplishment.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_4_0.png"}}, {"text": "A scene from Chainsaw Man. Makima is fighting a large brown grizzly bear, deep in a forest. The bear is tall and standing on two legs, roaring. The bear is also wearing a crown because it is the king of all bears. Around them are tall trees and other animals watching.", "parameters": {"negative_prompt": "blurry, cropped, ugly"}, "output": {"url": "./assets/image_5_0.png"}}]}
dataset
null
8
sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease
sarahmiller137
token-classification
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "named-entity-recognition", "en", "dataset:ncbi_disease", "license:cc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
"2022-08-22T16:06:00"
2023-03-23T15:57:02+00:00
24
0
---
datasets: ncbi_disease
language: en
license: cc
metrics:
- precision
- recall
- f1
- accuracy
tags:
- named-entity-recognition
- token-classification
task:
- named-entity-recognition
- token-classification
widget:
- text: ' The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T patients and has long been associated with chromosomal instability.'
---

## Model information:
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext model finetuned using the ncbi_disease dataset from the datasets library.

## Intended uses:
This model is intended to be used for named entity recognition tasks. It will identify disease entities in text, predicting labels based upon the NCBI Disease dataset; please see the dataset information for details.

## Limitations:
Note that the dataset and model may not be fully representative or suitable for all needs. It is recommended that the dataset paper and the base model card be reviewed before using the model:
- [NCBI Disease](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf)
- [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)

## Widget text:
The text displayed in the example widget was taken from one of the NCBI Disease abstracts.

## How to use:
Load the model from the library using the following checkpoints:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease")
# AutoModelForTokenClassification loads the token-classification head needed for NER
model = AutoModelForTokenClassification.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease")
```
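For end-to-end entity extraction, the checkpoint can also be used through the token-classification pipeline. This is a sketch; `aggregation_strategy` assumes the checkpoint ships a label map:

```python
# Sketch: end-to-end disease NER via the pipeline API. aggregation_strategy
# groups word pieces into entity spans; this assumes the checkpoint includes
# a token-classification head and label map.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease",
    aggregation_strategy="simple",
)
print(ner("The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T patients."))
```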
[ "NCBI DISEASE" ]
BioNLP
{"datasets": "ncbi_disease", "language": "en", "license": "cc", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["named-entity-recognition", "token-classification"], "task": ["named-entity-recognition", "token-classification"], "widget": [{"text": " The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T patients and has long been associated with chromosomal instability."}]}
dataset
null
9
tsavage68/MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
tsavage68
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
"2024-05-20T07:31:23"
2024-05-23T22:54:22+00:00
5
0
---
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
license: llama3
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO

This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6020
- Rewards/chosen: 0.7087
- Rewards/rejected: 0.4830
- Rewards/accuracies: 0.7341
- Rewards/margins: 0.2257
- Logps/rejected: -32.2447
- Logps/chosen: -28.9661
- Logits/rejected: -0.7358
- Logits/chosen: -0.7350

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6925 | 0.0489 | 50 | 0.6930 | -0.0016 | -0.0023 | 0.5011 | 0.0007 | -33.8624 | -31.3338 | -0.7320 | -0.7314 |
| 0.6841 | 0.0977 | 100 | 0.6807 | 0.2459 | 0.2195 | 0.6549 | 0.0264 | -33.1233 | -30.5088 | -0.7330 | -0.7323 |
| 0.6562 | 0.1466 | 150 | 0.6641 | 0.3800 | 0.3137 | 0.6791 | 0.0663 | -32.8092 | -30.0619 | -0.7310 | -0.7303 |
| 0.6334 | 0.1954 | 200 | 0.6509 | 0.1334 | 0.0355 | 0.7165 | 0.0979 | -33.7366 | -30.8837 | -0.7311 | -0.7304 |
| 0.6544 | 0.2443 | 250 | 0.6415 | 0.2943 | 0.1754 | 0.7209 | 0.1189 | -33.2701 | -30.3474 | -0.7311 | -0.7303 |
| 0.6145 | 0.2931 | 300 | 0.6304 | 0.3548 | 0.2099 | 0.7385 | 0.1448 | -33.1550 | -30.1459 | -0.7317 | -0.7310 |
| 0.6171 | 0.3420 | 350 | 0.6223 | 0.4756 | 0.3093 | 0.7341 | 0.1663 | -32.8238 | -29.7432 | -0.7336 | -0.7328 |
| 0.5911 | 0.3908 | 400 | 0.6181 | 0.6387 | 0.4602 | 0.7121 | 0.1785 | -32.3208 | -29.1996 | -0.7334 | -0.7327 |
| 0.5942 | 0.4397 | 450 | 0.6129 | 0.6839 | 0.4904 | 0.7253 | 0.1935 | -32.2203 | -29.0489 | -0.7347 | -0.7339 |
| 0.6096 | 0.4885 | 500 | 0.6090 | 0.7785 | 0.5741 | 0.7297 | 0.2044 | -31.9411 | -28.7335 | -0.7351 | -0.7343 |
| 0.5671 | 0.5374 | 550 | 0.6068 | 0.7522 | 0.5395 | 0.7275 | 0.2127 | -32.0566 | -28.8212 | -0.7355 | -0.7347 |
| 0.6066 | 0.5862 | 600 | 0.6061 | 0.7215 | 0.5067 | 0.7209 | 0.2147 | -32.1657 | -28.9236 | -0.7356 | -0.7348 |
| 0.5816 | 0.6351 | 650 | 0.6046 | 0.6882 | 0.4692 | 0.7231 | 0.2191 | -32.2910 | -29.0344 | -0.7356 | -0.7348 |
| 0.5968 | 0.6839 | 700 | 0.6030 | 0.6956 | 0.4723 | 0.7451 | 0.2233 | -32.2804 | -29.0097 | -0.7352 | -0.7344 |
| 0.6132 | 0.7328 | 750 | 0.6042 | 0.7103 | 0.4891 | 0.7297 | 0.2212 | -32.2246 | -28.9608 | -0.7354 | -0.7346 |
| 0.6133 | 0.7816 | 800 | 0.6021 | 0.6956 | 0.4697 | 0.7407 | 0.2258 | -32.2890 | -29.0099 | -0.7358 | -0.7350 |
| 0.6397 | 0.8305 | 850 | 0.6029 | 0.7027 | 0.4791 | 0.7341 | 0.2236 | -32.2579 | -28.9862 | -0.7354 | -0.7346 |
| 0.6273 | 0.8793 | 900 | 0.6030 | 0.7126 | 0.4896 | 0.7341 | 0.2230 | -32.2229 | -28.9533 | -0.7356 | -0.7348 |
| 0.5996 | 0.9282 | 950 | 0.6019 | 0.7087 | 0.4830 | 0.7341 | 0.2257 | -32.2447 | -28.9661 | -0.7358 | -0.7350 |
| 0.5319 | 0.9770 | 1000 | 0.6020 | 0.7087 | 0.4830 | 0.7341 | 0.2257 | -32.2447 | -28.9661 | -0.7358 | -0.7350 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
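As a brief note on how the reward columns above are defined (this follows TRL's standard DPO convention, which the `trl`/`dpo` tags suggest; the run name's "03beta" presumably means β = 0.3):

$$
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right),
\qquad
\text{margins} = r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})
$$

Under this convention, "Rewards/chosen" and "Rewards/rejected" are β-scaled log-probability ratios between the policy and the frozen SFT reference, and "Rewards/accuracies" is the fraction of pairs where the chosen reward exceeds the rejected one.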
[ "MEDQA" ]
BioNLP
{"base_model": "tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license": "llama3", "tags": ["trl", "dpo", "generated_from_trainer"], "model-index": [{"name": "MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO", "results": []}]}
dataset
null
10
mradermacher/Llama-3-VNTL-Vectors-i1-GGUF
mradermacher
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Cas-Warehouse/Llama-3-VNTL-Vectors", "base_model:quantized:Cas-Warehouse/Llama-3-VNTL-Vectors", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
"2025-03-08T23:07:11"
2025-03-09T01:00:08+00:00
589
0
---
base_model: Cas-Warehouse/Llama-3-VNTL-Vectors
language:
- en
library_name: transformers
tags:
- mergekit
- merge
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Cas-Warehouse/Llama-3-VNTL-Vectors

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-VNTL-Vectors-i1-GGUF/resolve/main/Llama-3-VNTL-Vectors.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
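A minimal Python sketch for running one of these quants locally via `llama-cpp-python`. The file name is taken from the quant table above; the context size, prompt, and token limit are illustrative:

```python
# Sketch: run a quant with llama-cpp-python (`pip install llama-cpp-python`).
# The file name comes from the quant table above; settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-VNTL-Vectors.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Translate to English: お疲れ様です。", max_tokens=64)
print(out["choices"][0]["text"])
```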
[ "CAS" ]
Non_BioNLP
{"base_model": "Cas-Warehouse/Llama-3-VNTL-Vectors", "language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "quantized_by": "mradermacher"}
dataset
null
11