Model Card for oe-roberta-base-squad1

OE-RoBERTa is a domain-adapted version of RoBERTa-base, further pretrained on research literature in optoelectronics. The adapted model is then fine-tuned on SQuAD v1.1 for extractive question answering.

Model Details

Model Description

  • Language(s) (NLP): English
  • Adapted from model: FacebookAI/roberta-base
  • Model size: 124M parameters (F32, Safetensors)

Model Sources

  • Paper: https://doi.org/10.1021/acs.jcim.4c02029

Uses

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="Dingyun-Huang/oe-roberta-base-squad1")
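
A minimal usage sketch: the question-answering pipeline takes a question and a context passage and returns the predicted answer span with a confidence score. The question and context below are illustrative only, not drawn from the model's training data.

result = pipe(
    question="Which material serves as the hole-transport layer?",
    context="The device uses PEDOT:PSS as the hole-transport layer and PCBM as the electron acceptor.",
)
print(result["answer"], result["score"])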

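Alternatively, the checkpoint can be loaded directly with the standard transformers Auto classes; this is the generic loading pattern, not an API specific to this model.

# Load the tokenizer and QA model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Dingyun-Huang/oe-roberta-base-squad1")
model = AutoModelForQuestionAnswering.from_pretrained("Dingyun-Huang/oe-roberta-base-squad1")
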
Citation

BibTeX:

@article{doi:10.1021/acs.jcim.4c02029,
  author  = {Huang, Dingyun and Cole, Jacqueline M.},
  title   = {Cost-Efficient Domain-Adaptive Pretraining of Language Models for Optoelectronics Applications},
  journal = {Journal of Chemical Information and Modeling},
  volume  = {65},
  number  = {5},
  pages   = {2476--2486},
  year    = {2025},
  doi     = {10.1021/acs.jcim.4c02029},
  note    = {PMID: 39933074},
  url     = {https://doi.org/10.1021/acs.jcim.4c02029}
}