
Model Card for dfurman/phi-2-scientific-papers-base-v0.1

A base model for scientific papers, fine-tuned on roughly 70 MB of research literature (plain text).

Model Details

Model Description

  • Developed by: Daniel Furman
  • Model type: Phi-2
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: microsoft/phi-2

Uses

This model is intended for next-word prediction on scientific papers. It serves as a base model for the scientific research domain.

Direct Use

Use for document completion on scientific papers.
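A minimal inference sketch using the standard transformers text-generation API; the prompt and generation parameters below are illustrative, not recommended settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dfurman/phi-2-scientific-papers-base-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # the repository stores F32 weights
    device_map="auto",
)

# Document completion: feed the opening of a paper and let the model continue it.
prompt = "Abstract: We investigate the role of attention mechanisms in"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```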

Downstream Use

Fine-tune for other tasks in the scientific literature domain, such as question answering (Q&A) on scientific papers.
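As one illustration, a LoRA fine-tuning sketch using peft and transformers; the dataset file, hyperparameters, and target module names are assumptions for illustration only, not settings used for this model:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "dfurman/phi-2-scientific-papers-base-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical LoRA target modules for the Phi-2 architecture; verify against the
# model's named modules before training.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hypothetical corpus of scientific Q&A pairs, one example per line of plain text.
dataset = load_dataset("text", data_files={"train": "scientific_qa.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="phi-2-scientific-qa",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```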

Out-of-Scope Use

Any application outside of NLP tasks related to scientific research.

Bias, Risks, and Limitations

No guardrails are baked into this model. Use at your own risk.

Compute Info

This model was fine-tuned with the Hugging Face accelerate package on a RunPod cluster with 4x A100-SXM4-80GB GPUs (roughly 99% memory utilization on each GPU during training).
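A schematic multi-GPU training loop with accelerate (launched, for example, via `accelerate launch --num_processes 4`); the data file, batch size, and optimizer settings below are placeholders, not the actual training configuration for this model:

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

accelerator = Accelerator()  # picks up the multi-GPU setup from `accelerate config`

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Hypothetical plain-text corpus of scientific papers, one passage per line.
with open("papers.txt") as f:
    lines = [line.strip() for line in f if line.strip()]
encodings = [tokenizer(line, truncation=True, max_length=512, return_tensors="pt") for line in lines]

def collate(batch):
    ids = torch.nn.utils.rnn.pad_sequence(
        [b["input_ids"][0] for b in batch],
        batch_first=True,
        padding_value=tokenizer.pad_token_id,
    )
    labels = ids.masked_fill(ids == tokenizer.pad_token_id, -100)
    return {
        "input_ids": ids,
        "attention_mask": (ids != tokenizer.pad_token_id).long(),
        "labels": labels,
    }

dataloader = DataLoader(encodings, batch_size=2, shuffle=True, collate_fn=collate)

# prepare() wraps the model, optimizer, and dataloader for distributed training.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for batch in dataloader:
    outputs = model(**batch)
    accelerator.backward(outputs.loss)  # replaces loss.backward() in distributed setups
    optimizer.step()
    optimizer.zero_grad()
```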

