---
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:100
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L4-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
---
# CrossEncoder based on cross-encoder/ms-marco-MiniLM-L4-v2
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L4-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L4-v2) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L4-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L4-v2)
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
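These properties can also be checked after loading; a small sketch (the attribute paths assume the standard `CrossEncoder` wrapper in recent sentence-transformers releases):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L4-v2")
print(model.tokenizer.model_max_length)  # 512: maximum sequence length
print(model.model.config.num_labels)     # 1: single relevance score per pair
```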
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("clturner23/cross_encoder_trained_model")
# Get scores for pairs of texts
pairs = [
    ['accountant', 'Graphic designer creating logos.'],
    ['software engineer', 'UX designer creating user interfaces.'],
    ['accountant', 'Payroll clerk processing salaries.'],
    ['architect', 'Structural engineer analyzing designs.'],
    ['software engineer', 'Database administrator optimizing SQL queries.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'accountant',
    [
        'Graphic designer creating logos.',
        'UX designer creating user interfaces.',
        'Payroll clerk processing salaries.',
        'Structural engineer analyzing designs.',
        'Database administrator optimizing SQL queries.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
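Because a cross-encoder scores each pair jointly, it is typically used to rerank a shortlist produced by a faster first-stage retriever. A minimal sketch of that pattern using `predict` (the candidate list here is illustrative; in practice it would come from BM25 or a bi-encoder):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("clturner23/cross_encoder_trained_model")

query = "accountant"
# Shortlist from any first-stage retriever (illustrative data)
candidates = [
    "Graphic designer creating logos.",
    "Payroll clerk processing salaries.",
    "Structural engineer analyzing designs.",
]

# Score each (query, candidate) pair, then sort best-first
scores = model.predict([(query, doc) for doc in candidates])
for doc, score in sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```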
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 100 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
* Samples:
  | sentence_0                     | sentence_1                                          | label            |
  |:-------------------------------|:----------------------------------------------------|:-----------------|
  | <code>accountant</code>        | <code>Graphic designer creating logos.</code>       | <code>0.0</code> |
  | <code>software engineer</code> | <code>UX designer creating user interfaces.</code>  | <code>0.0</code> |
  | <code>accountant</code>        | <code>Payroll clerk processing salaries.</code>     | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
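In code, these parameters map directly onto the loss constructor. A minimal sketch (assuming sentence-transformers v4+, where the Cross Encoder losses live under `sentence_transformers.cross_encoder.losses`):

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L4-v2")

# Identity activation: the loss consumes the raw logit directly;
# pos_weight=None weights positive and negative pairs equally.
loss = BinaryCrossEntropyLoss(
    model,
    activation_fn=torch.nn.Identity(),
    pos_weight=None,
)
```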
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `num_train_epochs`: 10
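
Put together, a run matching these non-default hyperparameters could look like the sketch below (assuming the sentence-transformers v4+ Cross Encoder trainer; the inline three-row dataset is an illustrative stand-in for the unnamed 100-sample dataset, taken from the samples above):

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L4-v2")

# Illustrative stand-in for the unnamed 100-sample training dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["accountant", "software engineer", "accountant"],
    "sentence_1": [
        "Graphic designer creating logos.",
        "UX designer creating user interfaces.",
        "Payroll clerk processing salaries.",
    ],
    "label": [0.0, 0.0, 1.0],
})

args = CrossEncoderTrainingArguments(
    output_dir="cross_encoder_trained_model",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=10,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=BinaryCrossEntropyLoss(model),  # defaults match the parameters above
)
trainer.train()
```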
#### All Hyperparameters