# Model Card for gikebe/inclusion-model

## Model Details

### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Gikebe
- Model type: BERT-based sequence classification model for inclusion-related text classification
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: bert-base-uncased
### Model Sources
- Repository: https://huggingface.co/gikebe/inclusion-model/
## Uses

### Direct Use
This model can be used to classify text related to inclusion, diversity, and social justice topics into different categories such as "Inclusion Mindset," "Intersectionality," "Empowerment," "Privilege," and "Perfectionism."
### Out-of-Scope Use
The model is not suitable for:

- Classification tasks outside of the diversity and inclusion domain.
- Use cases where highly nuanced or sensitive topics require additional layers of ethical consideration.
## Bias, Risks, and Limitations

- The model may reflect biases present in the underlying training data.
- As it is trained on a specific set of texts, it may not generalize well to all contexts related to inclusion and diversity.
- The model could misinterpret or misclassify content in languages other than English or in cultural contexts it wasn’t trained on.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub (repository linked above)
classifier = pipeline("text-classification", model="gikebe/inclusion-model")

result = classifier("Women of color face unique challenges that are often overlooked in diversity discussions.")
print(result)
```
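If you prefer to work with the model and tokenizer directly (for example, to inspect per-class probabilities), the sketch below shows one way to do so. It assumes the repository ID above and that the label mapping was saved in the model config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gikebe/inclusion-model"  # assumes the repository linked above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Women of color face unique challenges that are often overlooked in diversity discussions."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities; label names come from the model's config
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 4))
```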
## Training Details

### Training Data
The model was trained on a custom dataset of inclusion-related texts, derived from books such as Inclusion on Purpose by Ruchika Tulshyan, The Memo by Minda Harts, and others.
### Training Procedure
The training procedure involved fine-tuning the BERT-based model (bert-base-uncased) for text classification.
#### Preprocessing
Text was tokenized with the BertTokenizer, truncating to the maximum sequence length.
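As a minimal sketch of that preprocessing step (the exact maximum length used in training is not stated in this card, so the value below is only illustrative):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Truncate (and pad) to a fixed length; 128 is an assumed value for illustration,
# since BERT itself supports sequences of up to 512 tokens.
encoded = tokenizer(
    "Inclusion on purpose starts with acknowledging privilege.",
    truncation=True,
    padding="max_length",
    max_length=128,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # (1, 128)
```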
#### Training Hyperparameters

- Epochs: 3
- Batch size: 8
- Optimizer: AdamW
- Learning rate: 5e-5
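A minimal fine-tuning sketch using these hyperparameters is shown below. The label set mirrors the categories listed under Direct Use, and the toy dataset only stands in for the custom inclusion corpus described above; neither reflects the exact original training setup.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Labels follow the categories listed under "Direct Use"; the label order is an assumption.
labels = ["Inclusion Mindset", "Intersectionality", "Empowerment", "Privilege", "Perfectionism"]
label2id = {name: i for i, name in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id=label2id,
)

# Toy stand-in for the custom inclusion dataset described above.
dataset = Dataset.from_dict(
    {
        "text": [
            "Building an inclusion mindset starts with listening.",
            "Overlapping identities shape how exclusion is experienced.",
        ],
        "label": [label2id["Inclusion Mindset"], label2id["Intersectionality"]],
    }
)
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# Hyperparameters mirror the list above; AdamW is the Trainer's default optimizer.
args = TrainingArguments(
    output_dir="inclusion-model",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=dataset).train()
```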
## Evaluation

Evaluation uses a held-out test split of the same custom dataset used for training.
### Testing Data, Factors & Metrics

#### Testing Data
[More Information Needed]
#### Factors
Relevant factors include the context and specificity of the inclusion-related texts.
#### Metrics

Evaluation metrics have not yet been reported.

### Results

Results have not yet been reported.
#### Summary

## Model Examination
This model is based on the BERT architecture and fine-tuned for sequence classification with the objective of predicting categories related to inclusion and diversity.
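For a quick look at what the fine-tuned head exposes, one can load the configuration and inspect the label mapping and classification layer. A brief sketch, assuming the label mapping was saved with the checkpoint:

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

model_id = "gikebe/inclusion-model"  # repository linked above

config = AutoConfig.from_pretrained(model_id)
print(config.num_labels)  # number of inclusion-related categories
print(config.id2label)    # label names, if they were saved with the checkpoint

model = AutoModelForSequenceClassification.from_pretrained(model_id)
# For a BERT sequence classifier this is a single linear layer over the pooled output.
print(model.classifier)
```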
## Technical Specifications
The model was trained using cloud-based GPU resources.
## Citation
BibTeX:
```bibtex
@misc{gikebe_inclusion_2024,
  author       = {Gikebe},
  title        = {Inclusion Model},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/gikebe/inclusion-model}},
}
```
APA:
Gikebe. (2024). Inclusion Model. Hugging Face. https://huggingface.co/gikebe/inclusion-model