
GEM_PubMedQA Model Card

This model card provides an overview of GEM_PubMedQA, a model built on the GEM architecture and finetuned on the PubMedQA dataset.

Purpose

The GEM_PubMedQA model was developed to assess the performance of the GEM architecture on domain-specific datasets, with a focus on healthcare. The PubMedQA dataset, a key benchmark in this field, was selected to evaluate the architecture's effectiveness.

Key Details

  • License: Apache-2.0
  • Dataset: qiaojin/PubMedQA
  • Language: English
  • Metric: Accuracy (92.5%)
  • Base Model: google-bert/bert-base-uncased
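
PubMedQA frames each question as a three-way classification (yes / no / maybe), so the reported accuracy is simple label agreement between predictions and gold answers. A minimal sketch of how that metric can be computed; the label lists here are hypothetical, not taken from the actual evaluation run:

```python
def accuracy(predictions, references):
    """Fraction of examples where the predicted label matches the gold label."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical PubMedQA-style labels (yes / no / maybe).
preds = ["yes", "no", "maybe", "yes"]
golds = ["yes", "no", "yes", "yes"]
print(accuracy(preds, golds))  # → 0.75
```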

Model Details

The GEM_PubMedQA model is built on the GEM architecture and finetuned from the google-bert/bert-base-uncased model using the PubMedQA dataset. The training was performed with the following parameters:

  • Number of epochs: 5
  • Batch size: 128
  • Learning rate: 2e-5
  • Maximum sequence length: 128
  • Gradient accumulation steps: 2
  • Cluster size: 256
  • Threshold: 0.65
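
The hyperparameters above can be collected into a single training configuration. Note that with a per-device batch size of 128 and 2 gradient-accumulation steps, the effective batch size is 256. A minimal sketch; the key names are illustrative rather than the exact argument names of the original training script, and `cluster_size` and `threshold` are assumed to be GEM-specific settings:

```python
# Hyperparameters as listed in this card; names are illustrative,
# not necessarily those used in the original training script.
config = {
    "num_train_epochs": 5,
    "per_device_train_batch_size": 128,
    "learning_rate": 2e-5,
    "max_seq_length": 128,
    "gradient_accumulation_steps": 2,
    # GEM-specific settings (assumed, not standard trainer arguments):
    "cluster_size": 256,
    "threshold": 0.65,
}

# Effective batch size = per-device batch size * accumulation steps.
effective_batch = (config["per_device_train_batch_size"]
                   * config["gradient_accumulation_steps"])
print(effective_batch)  # → 256
```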

Model repository: GEM025/GEM_PubMedQA, finetuned from google-bert/bert-base-uncased on the qiaojin/PubMedQA dataset.