
Model Card for Hello-SimpleAI/chatgpt-qa-detector-roberta

This model is trained on question-answer pairs from the filtered full-text split of Hello-SimpleAI/HC3.

For more details, see the arXiv paper 2301.07597 and the GitHub project Hello-SimpleAI/chatgpt-comparison-detection.

The base checkpoint is roberta-base. We train it on all Hello-SimpleAI/HC3 data (without a held-out set) for 1 epoch.

(Training for 1 epoch is consistent with the experiments in our paper.)
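Since this is a standard text-classification checkpoint, it can be loaded with the Hugging Face `pipeline` API. A minimal sketch follows; the input formatting (question and answer concatenated into one string) and the exact label names are assumptions, not guaranteed by this card, so check the model's `config.json` for the actual label mapping.

```python
from transformers import pipeline

# Load the detector as a standard text-classification pipeline.
detector = pipeline(
    "text-classification",
    model="Hello-SimpleAI/chatgpt-qa-detector-roberta",
)

# Assumption: the question and answer are passed as a single concatenated
# string; consult the HC3 project for the exact expected format.
qa_pair = (
    "What is HC3? "
    "HC3 is a comparison corpus of human and ChatGPT answers to the same questions."
)

result = detector(qa_pair)
print(result)  # e.g. [{"label": ..., "score": ...}]
```

The pipeline returns a list with one dict per input, containing the predicted label and its confidence score.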

Citation

Check out this paper: arXiv: 2301.07597

@article{guo-etal-2023-hc3,
    title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
    author = "Guo, Biyang  and
      Zhang, Xin  and
      Wang, Ziyuan  and
      Jiang, Minqi  and
      Nie, Jinran  and
      Ding, Yuxuan  and
      Yue, Jianwei  and
      Wu, Yupeng",
    journal = "arXiv preprint arXiv:2301.07597",
    year = "2023",
}
