Model Details
Model Description: FinBERT2-base is a finance-domain pretrained Chinese language model. It is built on the 125-million-parameter RoBERTa-Base and further pretrained on 32B tokens of Chinese financial corpora, including a large volume of research reports, news, and announcements.
- Developed by: valuesimplex
- Model Type: Transformer-based language model
- Language(s): Chinese
- Parent Model: hfl/chinese-roberta-wwm-ext (see the chinese-roberta repository for more information about the base model)
- Resources for more information: https://github.com/valuesimplex/FinBERT
Direct Use
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")
```
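Once the model and tokenizer are loaded, one common way to use an encoder like this is to extract sentence embeddings from the last hidden state. The sketch below uses mean pooling over non-padding tokens; the example sentence and the pooling strategy are illustrative choices, not part of the released model.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT2-base")

# Hypothetical Chinese financial sentence ("the company's net profit grew year over year").
inputs = tokenizer("公司净利润同比增长", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional embedding per input sentence (RoBERTa-Base hidden size).
embedding = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
```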
Further Usage
For continual pre-training or fine-tuning, see: https://github.com/valuesimplex/FinBERT
Model tree for valuesimplex-ai-lab/FinBERT2-base
- Base model: hfl/chinese-roberta-wwm-ext