---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:35520
- loss:MultipleNegativesRankingLoss
base_model: DeepChem/ChemBERTa-77M-MLM
widget:
- source_sentence: C[NH+]1CCC(CN2c3ccccc3Sc3ccccc32)C1
  sentences:
  - CC(C)CN(CC(O)C(Cc1ccccc1)NC(=O)OC1COC2OCCC12)S(=O)(=O)c1ccc(N)cc1
  - COC(=O)NC(C(=O)NC(Cc1ccccc1)C(O)CN(Cc1ccc(-c2ccccn2)cc1)NC(=O)C(NC(=O)OC)C(C)(C)C)C(C)(C)C
  - C=C1c2cccc([O-])c2C(=O)C2=C([O-])C3(O)C(=O)C(C(N)=O)=C([O-])C([NH+](C)C)C3C(O)C12
- source_sentence: CC(C)(C)[NH2+]CC(O)COc1ccccc1C1CCCC1
  sentences:
  - C[NH2+]C1C(OC2C(OC3C(O)C(O)C(NC(N)=[NH2+])C(O)C3NC(N)=[NH2+])OC(C)C2(O)C=O)OC(CO)C(O)C1O
  - CC(C)CNCc1ccc(-c2ccccc2S(=O)(=O)N2CCCC2)cc1
  - CC1C[NH+](CC(Cc2ccccc2)C(=O)NCC(=O)[O-])CCC1(C)c1cccc(O)c1
- source_sentence: CC1CC2C3CCC4=CC(=O)C=CC4(C)C3(F)C(O)CC2(C)C1(OC(=O)c1ccccc1)C(=O)CO
  sentences:
  - CC1CC=CC=CC=CC=CC(OC2OC(C)C(O)C([NH3+])C2O)CC2OC(O)(CC(O)CC3OC3C=CC(=O)O1)CC(O)C2C(=O)[O-]
  - C=CC1(C)CC(OC(=O)CSC2CC3CCC(C2)[NH+]3C)C2(C)C(C)CCC3(CCC(=O)C32)C(C)C1O
  - CC(C)C(CN1CCC(C)(c2cccc(O)c2)C(C)C1)NC(=O)C1Cc2ccc(O)cc2CN1
- source_sentence: CC(C)[NH2+]CC1CCc2cc(CO)c([N+](=O)[O-])cc2N1
  sentences:
  - CC(Cc1cc2c(c(C(N)=O)c1)N(CCCO)CC2)[NH2+]CCOc1ccccc1OCC(F)(F)F
  - COC(=O)NC(C(=O)NC(Cc1ccccc1)C(O)CN(Cc1ccc(-c2ccccn2)cc1)NC(=O)C(NC(=O)OC)C(C)(C)C)C(C)(C)C
  - COc1ccccc1Oc1c([N-]S(=O)(=O)c2ccc(C(C)(C)C)cc2)nc(-c2ncccn2)nc1OCCO
- source_sentence: COc1ccc(C(=O)CC(=O)c2ccc(C(C)(C)C)cc2)cc1
  sentences:
  - C[N+]1(C)CCC(=C(c2ccccc2)c2ccccc2)CC1
  - CC#CCC(C)C(O)C=CC1C(O)CC2CC(=CCCCC(=O)[O-])CC21
  - C=C1CC2CCC34CC5OC6C(OC7CCC(CC(=O)CC8C(CC9OC(CCC1O2)CC(C)C9=C)OC(CC(O)CN)C8OC)OC7C6O3)C5O4
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on DeepChem/ChemBERTa-77M-MLM
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: all dev
      type: all-dev
    metrics:
    - type: cosine_accuracy
      value: 0.7844594594594595
      name: Cosine Accuracy
---
# SentenceTransformer based on DeepChem/ChemBERTa-77M-MLM
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [DeepChem/ChemBERTa-77M-MLM](https://huggingface.co/DeepChem/ChemBERTa-77M-MLM). It maps SMILES strings to a 384-dimensional dense vector space and can be used for molecular similarity search, clustering of compounds, and related tasks.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [DeepChem/ChemBERTa-77M-MLM](https://huggingface.co/DeepChem/ChemBERTa-77M-MLM)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("HassanCS/chemBERTa-tuned-on-ClinTox-4")
# Run inference
sentences = [
'COc1ccc(C(=O)CC(=O)c2ccc(C(C)(C)C)cc2)cc1',
'C[N+]1(C)CCC(=C(c2ccccc2)c2ccccc2)CC1',
'C=C1CC2CCC34CC5OC6C(OC7CCC(CC(=O)CC8C(CC9OC(CCC1O2)CC(C)C9=C)OC(CC(O)CN)C8OC)OC7C6O3)C5O4',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
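For molecular search, you can also score one query against a set of candidates. Below is a minimal follow-up sketch; the SMILES are taken from the widget examples above:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("HassanCS/chemBERTa-tuned-on-ClinTox-4")

# Encode a query molecule and a small candidate set
query = model.encode("CC(C)(C)[NH2+]CC(O)COc1ccccc1C1CCCC1")
candidates = model.encode([
    "CC(C)CNCc1ccc(-c2ccccc2S(=O)(=O)N2CCCC2)cc1",
    "CC1C[NH+](CC(Cc2ccccc2)C(=O)NCC(=O)[O-])CCC1(C)c1cccc(O)c1",
])

# Cosine similarity between the query and each candidate; higher means closer
scores = model.similarity(query, candidates)
print(scores)  # shape [1, 2]
```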
## Evaluation
### Metrics
#### Triplet
* Dataset: `all-dev`
* Evaluated with [TripletEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.7845** |
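The score can be reproduced with the same evaluator. A minimal sketch, using one illustrative triplet from the evaluation dataset samples shown later in this card; substitute the full held-out triplet lists:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("HassanCS/chemBERTa-tuned-on-ClinTox-4")

# Anchor/positive/negative SMILES lists; one illustrative triplet shown here
evaluator = TripletEvaluator(
    anchors=["CC(C)OC(=O)CCCC=CCC1C(O)CC(O)C1C=CC(O)COc1cccc(C(F)(F)F)c1"],
    positives=["CC(Cc1cc2c(c(C(N)=O)c1)N(CCCO)CC2)[NH2+]CCOc1ccccc1OCC(F)(F)F"],
    negatives=["CC(C)C1(C(=O)NC2CC(=O)OC2(O)CF)CC(c2nccc3ccccc23)=NO1"],
    name="all-dev",
)

# Cosine accuracy: fraction of triplets where the anchor embedding is closer
# to the positive than to the negative
print(evaluator(model))
```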
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 35,520 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | <code>CC(C)CC(NC(=O)CNC(=O)c1cc(Cl)ccc1Cl)B(O)O</code> | <code>CC(=O)OC1CCC2(C)C(=CCC3C2CCC2(C)C(c4cccnc4)=CCC32)C1</code> | <code>CCOC(=O)c1ncn2c1CN(C)C(=O)c1cc(F)ccc1-2</code> |
  | <code>CC(C)CC(NC(=O)CNC(=O)c1cc(Cl)ccc1Cl)B(O)O</code> | <code>COc1ccc(C(CN(C)C)C2(O)CCCCC2)cc1</code> | <code>C[NH2+]C1(C)C2CCC(C2)C1(C)C</code> |
  | <code>CC(C)CC(NC(=O)CNC(=O)c1cc(Cl)ccc1Cl)B(O)O</code> | <code>CNC(=O)c1cc(Oc2ccc(NC(=O)Nc3ccc(Cl)c(C(F)(F)F)c3)cc2)ccn1.Cc1ccc(S(=O)(=O)O)cc1</code> | <code>Nc1ncnc2c1ncn2C1OC(CO)C(O)C1O</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
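For reference, a minimal sketch of constructing this loss with the parameters listed above. The single training row is taken from the samples table; when loaded with Sentence Transformers, the base checkpoint is wrapped with a mean-pooling head automatically:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Wrap the base checkpoint; mean pooling is added by default
model = SentenceTransformer("DeepChem/ChemBERTa-77M-MLM")

# (anchor, positive, negative) columns as in the training dataset; one row shown
train_dataset = Dataset.from_dict({
    "anchor": ["CC(C)CC(NC(=O)CNC(=O)c1cc(Cl)ccc1Cl)B(O)O"],
    "positive": ["COc1ccc(C(CN(C)C)C2(O)CCCCC2)cc1"],
    "negative": ["C[NH2+]C1(C)C2CCC(C2)C1(C)C"],
})

# scale=20.0 and cosine similarity match the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```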
### Evaluation Dataset
#### Unnamed Dataset
* Size: 1,480 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
* Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | <code>CC(C)OC(=O)CCCC=CCC1C(O)CC(O)C1C=CC(O)COc1cccc(C(F)(F)F)c1</code> | <code>C#CC1(O)CCC2C3CCC4=C(CCC(=O)C4)C3CCC21C</code> | <code>CC(C)CC(NC(=O)C(CCc1ccccc1)NC(=O)CN1CCOCC1)C(=O)NC(Cc1ccccc1)C(=O)NC(CC(C)C)C(=O)C1(C)CO1</code> |
  | <code>CC(C)OC(=O)CCCC=CCC1C(O)CC(O)C1C=CC(O)COc1cccc(C(F)(F)F)c1</code> | <code>C=CC1(C)CC(OC(=O)CSC2CC3CCC(C2)[NH+]3C)C2(C)C(C)CCC3(CCC(=O)C32)C(C)C1O</code> | <code>COC(=O)NC(C(=O)NC(Cc1ccccc1)C(O)CN(Cc1ccc(-c2ccccn2)cc1)NC(=O)C(NC(=O)OC)C(C)(C)C)C(C)(C)C</code> |
  | <code>CC(C)OC(=O)CCCC=CCC1C(O)CC(O)C1C=CC(O)COc1cccc(C(F)(F)F)c1</code> | <code>CC(Cc1cc2c(c(C(N)=O)c1)N(CCCO)CC2)[NH2+]CCOc1ccccc1OCC(F)(F)F</code> | <code>CC(C)C1(C(=O)NC2CC(=O)OC2(O)CF)CC(c2nccc3ccccc23)=NO1</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
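Taken together, these settings correspond to a trainer setup along the following lines. This is a sketch, not the exact training script; the placeholder dataset stands in for the full 35,520/1,480-triplet splits described above:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("DeepChem/ChemBERTa-77M-MLM")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

# Tiny placeholder split; substitute the real triplet datasets
triplets = {
    "anchor": ["CC(C)CC(NC(=O)CNC(=O)c1cc(Cl)ccc1Cl)B(O)O"],
    "positive": ["COc1ccc(C(CN(C)C)C2(O)CCCCC2)cc1"],
    "negative": ["C[NH2+]C1(C)C2CCC(C2)C1(C)C"],
}
train_dataset = Dataset.from_dict(triplets)
eval_dataset = Dataset.from_dict(triplets)

# Hyperparameters mirror the non-default values listed above
args = SentenceTransformerTrainingArguments(
    output_dir="chemBERTa-tuned-on-ClinTox-4",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```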
#### All Hyperparameters